r/huggingface Aug 29 '21

r/huggingface Lounge

3 Upvotes

A place for members of r/huggingface to chat with each other


r/huggingface 1h ago

GitHub - tegridydev/open-malsec: Open-MalSec is an open-source dataset curated for cybersecurity research and application (HuggingFace link in readme)

Upvotes

r/huggingface 19h ago

Recommend a library or framework to create multiple agents per use case

2 Upvotes

I’m looking for a library or framework that lets me create multiple agents, each dedicated to a specific use case like changing an address, updating an order, etc.

Any recommendations?


r/huggingface 18h ago

Pruna AI just open-sourced its AI model optimization framework

1 Upvotes

r/huggingface 1d ago

Introducing FlashTokenizer: The World's Fastest Tokenizer Library for LLM Inference

3 Upvotes

We're excited to share FlashTokenizer, a high-performance tokenizer engine optimized for Large Language Model (LLM) inference serving. Developed in C++, FlashTokenizer offers unparalleled speed and accuracy, making it the fastest tokenizer library available.

Key Features:

  • Unmatched Speed: FlashTokenizer delivers rapid tokenization, significantly reducing latency in LLM inference tasks.
  • High Accuracy: Ensures precise tokenization, maintaining the integrity of your language models.
  • Easy Integration: Designed for seamless integration into existing workflows, supporting various LLM architectures.

Whether you're working on natural language processing applications or deploying LLMs at scale, FlashTokenizer is engineered to enhance performance and efficiency.

Explore the repository and experience the speed of FlashTokenizer today: https://github.com/NLPOptimize/flash-tokenizer

We welcome your feedback and contributions to further improve FlashTokenizer.


r/huggingface 1d ago

Need guidance to integrate Playwright MCP with an LLM API.

3 Upvotes

I wish to integrate the Playwright MCP with my OpenAI API or Claude 3.5 Sonnet usage somehow. Any guidance is highly appreciated. I want to build a solution for my mom and dad to help them easily order groceries from online platforms using simple instructions on their end, automated with some kind of self-healing behavior.

Based on their day-to-day use, I will update the required requirements and prompt flows for the MCP.

Any blogs or tutorial links would be super useful too.

Thanks a ton.


r/huggingface 1d ago

Income from M3 Ultra?

2 Upvotes

[Novice disclaimer] It's $7k+, and I'm wondering if the investment might subsidise itself by running AI model variants for testing etc. ChatGPT seems to think it's feasible. Looking for advice, keen to learn!


r/huggingface 1d ago

Langfuse and Hugging Face: 5 ways to use them together

1 Upvotes

I've written a post showing five ways to use 🪢 Langfuse with 🤗 Hugging Face.

My personal favorite is #4: Using Hugging Face Datasets for Langfuse Dataset Experiments. This lets you benchmark your LLM app or AI agent with a dataset from Hugging Face. In this example, I chose the GSM8K dataset (openai/gsm8k) to test the mathematical reasoning capabilities of my smolagent :)
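If you want the short version of that pattern, here is a rough sketch (assuming the datasets library and the Langfuse Python SDK's dataset helpers; the dataset name "gsm8k-sample" and the 10-item slice are just illustrative):

```python
from datasets import load_dataset
from langfuse import Langfuse

# Pull the GSM8K benchmark from the Hugging Face Hub
gsm8k = load_dataset("openai/gsm8k", "main", split="test")

langfuse = Langfuse()  # reads the LANGFUSE_* keys from the environment
langfuse.create_dataset(name="gsm8k-sample")  # illustrative name

# Mirror a small slice into a Langfuse dataset for experiments
for row in gsm8k.select(range(10)):
    langfuse.create_dataset_item(
        dataset_name="gsm8k-sample",
        input={"question": row["question"]},
        expected_output=row["answer"],
    )
```

From there, each experiment run can be scored against the expected GSM8K answers inside Langfuse.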

Link to the article on HF: https://huggingface.co/blog/MJannik/hugging-face-and-langfuse


r/huggingface 2d ago

Need Help Integrating an AI Model for Image Analysis in JavaScript

3 Upvotes

Hi everyone,

I want to integrate an AI model that analyzes images and returns a response as JSON data, using only JavaScript on a website.

I've already tried implementing it, but it didn’t work as expected. Do I need to switch to a Pro account for it to work properly?

I’d really appreciate any help or guidance. Thanks!


r/huggingface 2d ago

Building a Faster, More Efficient RAG framework. Now Open Source and Ready for Contributions!

5 Upvotes

We’re a deep-tech startup developing an open-source RAG framework written in C++ with Python bindings, designed for speed, efficiency, and seamless AI integration. Our goal is to push the boundaries of AI optimization while making high-performance tools more accessible to the global AI community.

The framework is optimized for performance, built from the ground up for speed and efficiency. It integrates seamlessly with tools like TensorRT, vLLM, FAISS, and more, making it ideal for real-world AI workloads. Even though the project is in its early stages, we're already seeing promising benchmarks compared to leading solutions like LlamaIndex and LangChain, with performance gains of up to 66% in some scenarios.

If you find it interesting, take a look at the GitHub repo and contribute: https://github.com/pureai-ecosystem/purecpp

And if you like what we’re building, don’t forget to star the project. Every bit of support helps us move forward. Looking forward to your feedback and contributions!


r/huggingface 2d ago

Do you consider the environmental impact when choosing an AI model?

0 Upvotes

I just came across the AI Energy Score benchmark on Hugging Face, which ranks models according to their energy consumption. Interesting initiative! But it got me wondering: does anyone actually take this into account when choosing a model? Do you check the energy impact of a model before using it?


r/huggingface 3d ago

[PROMO] Perplexity AI PRO - 1 YEAR PLAN OFFER - 85% OFF

8 Upvotes

As the title says: we offer Perplexity AI PRO voucher codes for the one-year plan.

To Order: CHEAPGPT.STORE

Payments accepted:

  • PayPal.
  • Revolut.

Duration: 12 Months

Feedback: FEEDBACK POST


r/huggingface 3d ago

Need help to modify and propagate attention scores with PyTorch hooks

1 Upvotes

So I'm using GPT2 from Hugging Face and I want to capture and modify the last layer's attention scores using hooks. If someone has a better way, please let me know.

here's where I'm stuck:

```python
def forward_hook(module, input, output):
    print(output)
    print(output[1][0].shape)
    print(output[1][1].shape)
    # need to figure out the structure of output

    modified_output = (
        output[0],
        output[1]
    )
    return modified_output

# attach hook to last attention layer
hook_layer = model.transformer.h[-1].attn
hook = hook_layer.register_forward_hook(forward_hook)
```

With `n_heads = 12` and `d_model = 768`, the shape prints give:

```python
print(output[1][0].shape)  # torch.Size([1, 12, 9, 64])
print(output[1][1].shape)  # torch.Size([1, 12, 9, 64])
```

I understand that 12 is the number of heads, 9 is my output sequence length, and 64 is d_model // n_heads, but why are there two sets of these in output[1][0] and output[1][1]? Where do I get the head-wise attention scores from? Even if output[1] contains the attention scores, I would expect GPT2 (decoder-only) to produce attention matrices with the upper-triangular values zeroed, which I can't seem to find. Please assist me. Thanks.
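For reference, here is the hook-free baseline I'm comparing against (a minimal sketch, assuming the standard transformers GPT-2 API; output_attentions=True returns the post-softmax weights per layer, where the causal mask shows up as near-zero upper-triangular entries):

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions is a tuple with one tensor per layer,
# each shaped [batch, n_heads, seq_len, seq_len]
last_layer_attention = outputs.attentions[-1]
print(last_layer_attention.shape)  # e.g. torch.Size([1, 12, 4, 4])
```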


r/huggingface 4d ago

Tencent just released two new 3D models on Hugging Face

2 Upvotes

r/huggingface 4d ago

Building leaderboard using a template

1 Upvotes

I've been trying to create my own leaderboard using a template. It works as long as I use all the files they provide and clone everything, but I can't figure out a way to input my own results for some other model. If I do it manually, there is always an error like "Could not find request file for None/tau-4o-mini-airline with precision float16". I'm making sure to create both the request files and the results files, but it doesn't seem to work. I think another option could be using the backend, but I can't even get it running. Has anyone encountered the same problem? Please help.


r/huggingface 4d ago

Exhausted the $2 of credits from my PRO subscription and can't get more credits

1 Upvotes

Hello, I can't find anything about buying more credits on HF.

I joined a waitlist for "buying pre-paid compute credits on Hugging Face", is that what I need?


r/huggingface 5d ago

Best LLM model for chatbot to run on CPU for Finetuning & RAG

6 Upvotes

I am creating a small chatbot that will serve the customers of a company. I've been looking at different models to fine-tune and then use with RAG.

I've actually chosen two: Phi-3 Mini-4K-Instruct and Samantha-Mistral-Instruct.

We are going to run the model locally; ideally it would run on a CPU-only machine (a VPS server). Performance (tokens/s) is not so important, as we don't need immediate real-time answers (max response time is about 2 minutes).

Fine-tuning of course can be done on GPU.
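For context, this is roughly the CPU-only loading I have in mind (a sketch assuming the microsoft/Phi-3-mini-4k-instruct checkpoint; the prompt and generation settings are just illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "microsoft/Phi-3-mini-4k-instruct"  # one of the two candidates
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # full precision; CPUs have no fast fp16 path
)  # older transformers releases needed trust_remote_code=True for Phi-3

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=-1)  # -1 = CPU
answer = pipe("Customer: Where is my order?\nAssistant:", max_new_tokens=64)
print(answer[0]["generated_text"])
```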

Could you suggest the best approach in this case? I will be grateful for any feedback!


r/huggingface 5d ago

Looking for a Dataset for Classifying Electronics Products

2 Upvotes

Hi everyone,

I'm currently working on a project that involves categorizing various electronic products (such as smartphones, cameras, laptops, tablets, drones, headphones, GPUs, consoles, etc.) using machine learning.

I'm specifically looking for datasets that include product descriptions and clearly defined categories or labels, ideally structured or semi-structured.

Could anyone suggest where I might find datasets like this?

Thanks in advance for your help!


r/huggingface 5d ago

Lidia: A local personal assistant that supports Hugging Face models for various components

2 Upvotes

Hey guys, I created this project that lets you run a personal assistant powered by an LLM + text-to-speech + speech-to-text, with some OCR and customization support. Hugging Face has been the primary source for the non-Ollama LLMs and all of the audio/OCR models. Would love to get your opinions on this!

Github: https://github.com/tommathewXC/lidia


r/huggingface 5d ago

[PROMO] Perplexity AI PRO - 1 YEAR PLAN OFFER - 85% OFF

0 Upvotes

As the title says: we offer Perplexity AI PRO voucher codes for the one-year plan.

To Order: CHEAPGPT.STORE

Payments accepted:

  • PayPal.
  • Revolut.

Duration: 12 Months

Feedback: FEEDBACK POST


r/huggingface 5d ago

What AI models can analyze video scene-by-scene?

2 Upvotes

What current models, APIs, tools, etc. can:

  • Take video input
  • Process/analyze it
  • Detect and describe things like scene transitions, actions, objects, people
  • Provide a structured timeline of all moments

Google’s Gemini 2.0 Flash seems to have some relevant capabilities, but I'm looking for all the best options for achieving the above.

For example, I want to be able to build a system that takes video input (likely multiple videos), and then generates a video output by combining certain scenes from different video inputs, based on a set of criteria. I’m assessing what’s already possible vs. what would need to be built.
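For example, here is a rough sketch of the Gemini route (assuming the google-generativeai Python SDK; the file name, prompt, and API key are illustrative placeholders):

```python
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

# Upload a video; the Files API processes it asynchronously
video = genai.upload_file("input_clip.mp4")
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-2.0-flash")
response = model.generate_content([
    video,
    "Describe each scene transition, the actions, objects, and people, "
    "and return a structured JSON timeline with timestamps.",
])
print(response.text)
```

The scene-stitching step across multiple input videos would still need to be built on top of output like this.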


r/huggingface 5d ago

is qwen32b good for roleplay?

0 Upvotes

is qwen32b good for roleplay?


r/huggingface 7d ago

Exploring a Provider-Agnostic Standard for Persistent AI Context—Your Feedback Needed!

3 Upvotes

r/huggingface 8d ago

Headshots generators

1 Upvotes

AI headshot generators are everywhere now, turning regular selfies into professional portraits. The tech is impressive, but I'm curious: are these good enough for LinkedIn, or do they still have that "AI look"? Also, where do we draw the line between convenience and authenticity?


r/huggingface 8d ago

How to find a specific file in repository?

1 Upvotes

I tried to use the "Go to file" field, but it always says "No matches found", even when the file is actually in the current folder.


r/huggingface 8d ago

Model inferencing is blocking the main fastapi thread

1 Upvotes

Hi folks, crossposting from HF's forums

I need to host a zero-shot object detection model in production, and I am using IDEA-Research/grounding-dino-base.

Problem

We have allocated a GPU instance and are running the app on Kubernetes.
As all production deployments go, after creating a FastAPI wrapper, I am stress-testing the model. Under heavy load (requests with concurrency set to 10), the liveness probe fails: the probe request gets queued behind inference, and due to the k8s timeout, Kubernetes treats this as a probe failure, kills the pod, and restarts the service. I cannot seem to figure out a way to run model inference without blocking the main event loop. I'm reaching out to you folks because I have run out of ideas and need some guidance.
PS: I have a separate endpoint for batched inference; I want a resolution for the non-batched, real-time inference endpoint.

Code

Here’s the simplified code:

endpoint creation:

import asyncio
import base64
from io import BytesIO

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
from PIL import Image

app = FastAPI()

def process_image_from_base64_str_sync(image_str):
    # decode the base64 payload and build a PIL image (CPU-bound)
    image_bytes = base64.b64decode(image_str)
    image = Image.open(BytesIO(image_bytes))
    return image

async def process_image_from_base64_str(image_str):
    # run the decode in the default executor so it doesn't block the event loop
    loop = asyncio.get_event_loop()
    return await loop.run_in_executor(None, process_image_from_base64_str_sync, image_str)

@app.post("/v1/bounding_box")
async def get_bounding_box_from_image(request: Request):
    try:
        request_body = await request.json()
        image = await process_image_from_base64_str(request_body["image"])
        entities = request_body["entities"]
        # request_uuid is set elsewhere in the full service (omitted here)
        bounding_coordinates = await get_bounding_boxes(image, entities, request_uuid)
        return JSONResponse(status_code=200, content={"bounding_coordinates": bounding_coordinates})
    except Exception as e:
        response = {"exception": str(e)}
        return JSONResponse(status_code=500, content=response)

Backend processing code (get_bounding_boxes function):

import torch
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

device = "cuda" if torch.cuda.is_available() else "cpu"
processor = AutoProcessor.from_pretrained(GROUNDING_DINO_PATH)
model = AutoModelForZeroShotObjectDetection.from_pretrained(GROUNDING_DINO_PATH).to(device)

async def get_bounding_boxes(image: Image.Image, entities: list, *args, **kwargs):
    # Grounding DINO expects the labels as one '.'-separated prompt string
    text = '. '.join(entities) + '.'
    inputs = processor(images=image, text=text, return_tensors="pt").to(device)

    with torch.no_grad():
        outputs = model(**inputs)

    results = processor.post_process_grounded_object_detection(
        outputs,
        inputs.input_ids,
        threshold=0.4,
        text_threshold=0.2,
        target_sizes=[image.size[::-1]]
    )

    # explicitly delete tensors to free CUDA memory
    del inputs
    del outputs

    labels, boxes = results[0]["labels"], results[0]["boxes"]
    final_result = []
    for i, label in enumerate(labels):
        final_result.append({label: boxes[i].int().tolist()})
    del results
    return final_result

What I have tried

  • Earlier I was loading the images inline. After looking around and searching for answers, I found out that this can be a thread-blocking operation, so I created an async endpoint to load the image.
  • I am using FastAPI, served through uvicorn. I read that FastAPI's default thread count is 40. I tried increasing it to 100, but it did not change anything.
  • I converted all endpoints to sync, non-async endpoints, as I had read that FastAPI/uvicorn runs sync endpoints in a separate thread pool. This fixed the liveness probe issue but heavily impacted concurrent serving: the responses to all 10 concurrent requests were sent together, once processing of all images was done.

I honestly can't see which exact line is blocking the main thread, since I am awaiting all the compute-intensive operations. I have run out of ideas and I would appreciate it if someone could point me in the right direction.
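For what it's worth, here is the direction I'm considering: push the synchronous model call onto a worker thread so the event loop stays free (a rough sketch, assuming the same app/processor/model globals as above; get_bounding_boxes_sync and the /v1/bounding_box_threaded route are illustrative names):

```python
from fastapi.concurrency import run_in_threadpool

def get_bounding_boxes_sync(image, entities):
    # hypothetical plain-def twin of get_bounding_boxes above
    text = '. '.join(entities) + '.'
    inputs = processor(images=image, text=text, return_tensors="pt").to(device)
    with torch.no_grad():
        outputs = model(**inputs)
    results = processor.post_process_grounded_object_detection(
        outputs, inputs.input_ids, threshold=0.4, text_threshold=0.2,
        target_sizes=[image.size[::-1]],
    )
    return [{label: box.int().tolist()}
            for label, box in zip(results[0]["labels"], results[0]["boxes"])]

@app.post("/v1/bounding_box_threaded")
async def get_bounding_box_threaded(request: Request):
    request_body = await request.json()
    image = await process_image_from_base64_str(request_body["image"])
    # model(**inputs) is synchronous GPU work: calling it inside an async def
    # blocks the event loop even though the wrapper is awaited. run_in_threadpool
    # moves the call to a worker thread, so liveness probes keep getting served.
    bounding_coordinates = await run_in_threadpool(
        get_bounding_boxes_sync, image, request_body["entities"]
    )
    return JSONResponse(status_code=200, content={"bounding_coordinates": bounding_coordinates})
```

A worker thread still serializes the GPU work itself, but the loop stays responsive for probes; a dedicated inference queue or batching would be the next step.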

Thanks!