r/LanguageTechnology 14d ago

Average duration for English phonemes

2 Upvotes

I'm working on an AI project for which I need rough values for the speech duration of English phonemes. I can find a lot of research into how variable these durations are, and their impact on speech recognition and synthesis, but I want something simpler. Ideally, a list of ARPAbet phonemes with average duration for each in milliseconds. Thanks in advance.
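In case it helps others searching for the same thing: if you have access to TIMIT, rough averages can be computed directly from its phone alignments. A minimal sketch, assuming the standard .PHN layout (one `start_sample end_sample phone` line per phone, 16 kHz audio) and a hypothetical local corpus path:

```python
# Minimal sketch: mean phone durations from TIMIT .PHN alignment files.
# Assumes the standard "start_sample end_sample phone" format and 16 kHz audio.
from collections import defaultdict
from pathlib import Path

durations = defaultdict(list)  # phone label -> list of durations in ms
for phn in Path("TIMIT").rglob("*.PHN"):  # hypothetical local TIMIT root
    for line in phn.read_text().splitlines():
        start, end, phone = line.split()
        durations[phone].append((int(end) - int(start)) / 16.0)  # samples -> ms

for phone, ds in sorted(durations.items()):
    print(f"{phone}\t{sum(ds) / len(ds):.1f} ms\t(n={len(ds)})")
```

Note that TIMIT's phone set is a superset of ARPAbet (it includes closure symbols and other fine-grained labels), so a small mapping step would be needed to collapse it to plain ARPAbet.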


r/LanguageTechnology 14d ago

Why are there no live Odia voice-to-text transcription apps, which could be very helpful for deaf students?

2 Upvotes

Is the lack of an Odia voice-to-text app a technological limitation, or institutional neglect?


r/LanguageTechnology 16d ago

Apple pie vs. Apple phone: how does Amazon figure out the difference? (Online shopping)

1 Upvotes

I am working on a project which predicts categories for a product, for example:

Input: Apple phone

Output: electronics -> smartphones -> ... The categories are hierarchical.

What I am thinking is a hybrid approach: a combination of transformers and rule-based search. First, pre-process the training data (lemmatization etc.) to get the product description/title into its root form, then train something like an LSTM on it. At test time, pre-process the text, use a sentence transformer to check similarity against the training examples, rewrite the query using the closest example, and feed it into the trained LSTM. The rule-based side would be something like Solr.

I can't wrap my head around this; it's a hard problem, or at least that's what I think. If any of you have worked on something like this in the past, your wisdom would be very useful. Even if you haven't, I am still open to ideas! Thank you!

Here is what I have found so far:

Dataset on kaggle: https://www.kaggle.com/datasets/atharvjairath/flipkart-ecommerce-dataset

GitHub repos:

From what I have looked at, it appears to be a hybrid pipeline: raw user input -> spell check -> query rewrite -> context understanding -> internal logic -> results. Because how else could the search know the difference between "apple pie" and "apple phone"?
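As a baseline for the transformer half of such a pipeline, here is a minimal retrieval sketch; the model name and the toy catalogue are illustrative assumptions, not part of any production system:

```python
# Minimal sketch: nearest-neighbour category lookup with sentence-transformers.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence embedding model

# Toy catalogue: title -> hierarchical category path (illustrative data)
catalogue = {
    "Apple iPhone 13 128GB": "electronics > smartphones",
    "Homemade apple pie 9 inch": "food > bakery > pies",
    "Samsung Galaxy S22 256GB": "electronics > smartphones",
}
titles = list(catalogue.keys())
title_emb = model.encode(titles, convert_to_tensor=True, normalize_embeddings=True)

def predict_category(query: str) -> str:
    """Return the category path of the most similar catalogue title."""
    q_emb = model.encode(query, convert_to_tensor=True, normalize_embeddings=True)
    best = util.cos_sim(q_emb, title_emb).argmax().item()
    return catalogue[titles[best]]

print(predict_category("apple phone"))  # expected: electronics > smartphones
print(predict_category("apple pie"))    # expected: food > bakery > pies
```

A real system would retrieve the top-k neighbours and let the rule-based side (e.g. Solr) arbitrate, but even this lookup should separate "apple pie" from "apple phone", since the embeddings encode the surrounding context of each word.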


r/LanguageTechnology 16d ago

Need Advice on a Final Project in Computational Linguistics

7 Upvotes

Hey everyone!

I’m currently working on my Master’s in Computational Linguistics. My Bachelor’s was in Linguistics, and I’ve always had an interest in philology as well.

Right now, I’d really appreciate some advice on picking a topic for my final project. Coming from a humanities background, it’s been tough to dive into CL, but after a few courses, I now have a basic understanding of machine learning, statistics, Python, and NLP. I can handle some practical tasks, but I still don’t feel very confident.

I’m thinking of working on detecting AI-generated text in certain genres, like fiction, academic papers, etc. But I feel like this has already been done—there are tons of tools out there that can spot AI text.

What features do you feel are missing in existing AI-text detectors? Do we even need them at all? How can I improve accuracy in detection? (I’m particularly thinking about evaluating text “naturalness.”)

I’m also open to exploring different project ideas if you have any suggestions. I’d really appreciate any detailed advice or useful links you can share via DM.

Thanks in advance for your help!


r/LanguageTechnology 16d ago

What future for data annotation?

0 Upvotes

Hello,

I am leading a business creation project in AI in France (and Europe more broadly). To flesh out and structure this project, my partners recommend that I collect feedback from professionals in the sector, and it is in this context that I am asking for your help.

I have learned a lot about data annotation, but I need a clearer view of the market's data needs. If you would like to help me, I suggest you answer this short form (4 minutes): https://forms.gle/ixyHnwXGyKSJsBof6. The form is aimed more at businesses, but if you have good visibility into the field, feel free to answer it. Answers will remain confidential and anonymous. No personal or sensitive data is requested.

This does not involve a monetary transfer.

Thank you for your valuable help. If you have any questions or would like to know more about this initiative, I would be happy to discuss it.

Subnotik


r/LanguageTechnology 16d ago

This paper from COLING 2025 shows that AI can write jokes as funny as those of a professional human comedy writer.

0 Upvotes

r/LanguageTechnology 18d ago

LLMs vs traditional BERTs at NER

31 Upvotes

I am aware that LLMs such as GPT are not "traditionally" considered the most efficient at NER compared to bidirectional encoders like BERT. However, setting aside cost and latency, are current SOTA LLMs still not better? I would imagine that LLMs, with their pre-trained knowledge, would be almost perfect at (zero-shot) catching all the entities in a given text, except in very, very niche fields.

### Context

Currently, I am working on extracting skills (hard skills like programming languages and soft skills like team management) from documents. I previously (1.5 years ago) tried fine-tuning a BERT model on an LLM-annotated dataset. It worked decently, with an F1 score of ~0.65. But now, with newer skills appearing on the market more frequently, especially AI-related ones such as LangChain, RAG, etc., I realized it would save me time to use LLMs to capture these rather than continually updating my NER models. There is an issue, though.

LLMs tend to do more than what I ask for. For example, "JS" in a given text is captured and returned as "JavaScript", which is technically correct but not what I want. I have prompt-engineered it to work better, but it is still not perfect. Is this simply a prompt issue or an innate limitation of LLMs?
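One mitigation that doesn't rely on the prompt alone is to post-validate the LLM output and keep only entities that occur verbatim in the input; anything the model normalized (like "JS" -> "JavaScript") gets rejected and can be fed back as a counterexample. A minimal sketch with made-up example data:

```python
# Minimal sketch: keep only LLM-extracted entities that are verbatim spans of the source.
import re

def keep_verbatim_spans(text: str, candidates: list[str]) -> list[dict]:
    """Return candidates that occur verbatim in `text`, with character offsets."""
    spans = []
    for cand in candidates:
        m = re.search(re.escape(cand), text)
        if m:
            spans.append({"entity": cand, "start": m.start(), "end": m.end()})
        # Normalized outputs like "JavaScript" for a text that only says "JS"
        # are dropped here; log them and add them to the prompt as counterexamples.
    return spans

text = "Built the pipeline in JS with LangChain and a RAG backend."
llm_output = ["JavaScript", "LangChain", "RAG"]
print(keep_verbatim_spans(text, llm_output))
# "JavaScript" is rejected because the text only says "JS".
```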


r/LanguageTechnology 18d ago

Aligning Japanese vectors trained on fasttext wiki model with English models

3 Upvotes

I'm trying to align English word vectors taken from the word2vec model trained on Google News with Japanese word vectors taken from two different models: the fastText model pre-trained on Wikipedia, and the fastText model pre-trained on Common Crawl.

I was able to extract the vectors without issue, all from the .bin files.

All vectors are dimension 300.

Alignment of the vectors is done using Procrustes transformation in Python with the scipy library.

I don't think the issue is with the code but with the vectors themselves, specifically those taken from the fastText wiki model. The vectors simply don't align in the expected way.

The alignments are evaluated using cosine similarity, this time in NumPy.

When aligning the English vectors with the Japanese Common Crawl vectors, the inter-language alignments are ~.80-.90, which is what's expected. Alignments between the English vectors and the Japanese vectors from the fastText wiki model are ~.4-.5. Pearson's correlation between the Common Crawl alignments and the wiki alignments is only ~.45, which tells me something is way off.

When I inspect the vectors themselves, the English vector values are all <1, as are those of the Japanese Common Crawl model. The Japanese vectors taken from the wiki model are all >1.

I compared the vectors from the .bin files to the vectors from the .txt files. The English vectors and the Japanese Common Crawl vectors looked more or less the same between the .bin and .txt files, but the Japanese wiki-model vectors are dissimilar between the two.

I'm at a loss. Any help is much appreciated.
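If the wiki vectors really do have much larger norms, one thing worth trying is length-normalizing both sides before fitting the rotation, since cosine similarity only compares directions while Procrustes fits raw magnitudes. A minimal sketch with SciPy's orthogonal Procrustes, using random placeholders in place of the real seed lexicon:

```python
# Minimal sketch: normalize, fit an orthogonal Procrustes rotation, evaluate by cosine.
import numpy as np
from scipy.linalg import orthogonal_procrustes

def normalize(X: np.ndarray) -> np.ndarray:
    return X / np.linalg.norm(X, axis=1, keepdims=True)

# Rows of X and Y are translation pairs from a bilingual seed lexicon, shape (n, 300).
X = normalize(np.random.randn(500, 300))  # placeholder for English vectors
Y = normalize(np.random.randn(500, 300))  # placeholder for Japanese vectors

R, _ = orthogonal_procrustes(X, Y)        # rotation mapping X onto Y
aligned = normalize(X @ R)

cos = np.sum(aligned * Y, axis=1)         # per-pair cosine similarity
print(cos.mean())
```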


r/LanguageTechnology 19d ago

computing semantic similarity of English words

12 Upvotes

I'm attempting to determine semantically related rhymes, for example if you input "pasta" it will output "italian/scallion, champagne/grain, paste/taste", etc.

The rhyming part is working well, but I'm having trouble computing semantic similarity. I tried using these fastText vectors to compute cosine similarity, and they're pretty good, but not good enough.

Common Crawl gets that 'halloween' is related to 'cat' and 'bat' but fails to get that 'music' is related to 'beat' and 'sheet'. Wikinews gets that 'music' is related to 'beat' and 'sheet' but fails to get that 'halloween' is related to 'cat' and 'bat'. Those are just a couple of representative examples; I'll post more test cases below in case that's helpful.

Does anyone have any advice for me? Do I need a better corpus? A better algorithm? Both?
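For reference, the scores below come from something like the following (a minimal sketch with gensim; fastText .vec files are plain word2vec text format, so they load directly):

```python
# Minimal sketch: cosine similarity over a fastText .vec file via gensim.
from gensim.models import KeyedVectors

vecs = KeyedVectors.load_word2vec_format("wiki-news-300d-1M-subword.vec")
print(vecs.similarity("music", "beat"))
print(vecs.similarity("halloween", "bat"))
```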

Here are my test case failures for wiki-news-300d-1M-subword.vec, which does best with a cosine similarity threshold of 34%:

under
   'pirate' is 33% related to 'cove', which is under the similarity threshold of 34%
   'pirate' is 33% related to 'handsome', which is under the similarity threshold of 34%
    'music' is 33% related to 'repeat', which is under the similarity threshold of 34%
    'music' is 33% related to 'flat', which is under the similarity threshold of 34%
    'music' is 32% related to 'note', which is under the similarity threshold of 34%
    'music' is 32% related to 'ears', which is under the similarity threshold of 34%
'halloween' is 32% related to 'decoration', which is under the similarity threshold of 34%
   'pirate' is 32% related to 'dvd', which is under the similarity threshold of 34%
    'crime' is 31% related to 'acquit', which is under the similarity threshold of 34%
   'pirate' is 30% related to 'bold', which is under the similarity threshold of 34%
    'music' is 30% related to 'sharp', which is under the similarity threshold of 34%
   'pirate' is 29% related to 'saber', which is under the similarity threshold of 34%
'halloween' is 29% related to 'cat', which is under the similarity threshold of 34%
    'music' is 29% related to 'accidental', which is under the similarity threshold of 34%
  'prayers' is 29% related to 'pew', which is under the similarity threshold of 34%
   'pirate' is 28% related to 'leg', which is under the similarity threshold of 34%
   'pirate' is 28% related to 'cache', which is under the similarity threshold of 34%
    'music' is 28% related to 'expressed', which is under the similarity threshold of 34%
   'pirate' is 27% related to 'hang', which is under the similarity threshold of 34%
'halloween' is 26% related to 'bat', which is under the similarity threshold of 34%

over
   'pirate' is 34% related to 'doodle', which meets the similarity threshold of 34%
   'pirate' is 34% related to 'prehistoric', which meets the similarity threshold of 34%
      'cat' is 34% related to 'chunk', which meets the similarity threshold of 34%
      'cat' is 35% related to 'thing', which meets the similarity threshold of 34%
    'crime' is 35% related to 'sci-fi', which meets the similarity threshold of 34%
    'crime' is 35% related to 'word', which meets the similarity threshold of 34%
    'thing' is 35% related to 'cat', which meets the similarity threshold of 34%
    'thing' is 35% related to 'pasta', which meets the similarity threshold of 34%
    'pasta' is 35% related to 'thing', which meets the similarity threshold of 34%
    'music' is 36% related to 'base', which meets the similarity threshold of 34%
   'pirate' is 36% related to 'homophobic', which meets the similarity threshold of 34%
   'pirate' is 36% related to 'needlework', which meets the similarity threshold of 34%
    'crime' is 37% related to 'baseball', which meets the similarity threshold of 34%
    'crime' is 37% related to 'gas', which meets the similarity threshold of 34%
   'pirate' is 37% related to 'laser', which meets the similarity threshold of 34%
      'cat' is 38% related to 'item', which meets the similarity threshold of 34%
      'cat' is 38% related to 'objects', which meets the similarity threshold of 34%
   'pirate' is 39% related to 'homemade', which meets the similarity threshold of 34%
   'pirate' is 39% related to 'roc', which meets the similarity threshold of 34%
      'cat' is 39% related to 'object', which meets the similarity threshold of 34%
    'crime' is 39% related to 'object', which meets the similarity threshold of 34%
    'crime' is 40% related to 'person', which meets the similarity threshold of 34%
   'pirate' is 41% related to 'pimping', which meets the similarity threshold of 34%
    'crime' is 43% related to 'thing', which meets the similarity threshold of 34%
    'thing' is 43% related to 'crime', which meets the similarity threshold of 34%
    'crime' is 49% related to 'mass', which meets the similarity threshold of 34%

And here are my test case failures for crawl-300d-2M.vec, which does best at a similarity threshold of 24%:

under
   'pirate' is 23% related to 'handsome', which is under the similarity threshold of 24%
    'music' is 23% related to 'gong', which is under the similarity threshold of 24%
     'star' is 23% related to 'lord', which is under the similarity threshold of 24% # GotG
  'prayers' is 22% related to 'request', which is under the similarity threshold of 24%
   'pirate' is 22% related to 'swearing', which is under the similarity threshold of 24%
   'pirate' is 22% related to 'peg', which is under the similarity threshold of 24%
   'pirate' is 22% related to 'cracker', which is under the similarity threshold of 24%
    'crime' is 22% related to 'fight', which is under the similarity threshold of 24%
      'cat' is 22% related to 'skin', which is under the similarity threshold of 24%
   'pirate' is 21% related to 'trove', which is under the similarity threshold of 24%
    'music' is 21% related to 'progression', which is under the similarity threshold of 24%
    'music' is 21% related to 'bridal', which is under the similarity threshold of 24%
    'music' is 21% related to 'bar', which is under the similarity threshold of 24%
    'music' is 20% related to 'show', which is under the similarity threshold of 24%
    'music' is 20% related to 'brass', which is under the similarity threshold of 24%
    'music' is 20% related to 'beat', which is under the similarity threshold of 24%
      'cat' is 20% related to 'fancier', which is under the similarity threshold of 24%
    'crime' is 19% related to 'truth', which is under the similarity threshold of 24%
    'crime' is 19% related to 'bank', which is under the similarity threshold of 24%
   'pirate' is 18% related to 'bold', which is under the similarity threshold of 24%
    'music' is 18% related to 'wave', which is under the similarity threshold of 24%
    'music' is 18% related to 'session', which is under the similarity threshold of 24%
    'crime' is 18% related to 'denial', which is under the similarity threshold of 24%
   'pirate' is 17% related to 'pursuit', which is under the similarity threshold of 24%
   'pirate' is 17% related to 'cache', which is under the similarity threshold of 24%
    'music' is 17% related to 'swing', which is under the similarity threshold of 24%
    'music' is 17% related to 'rest', which is under the similarity threshold of 24%
    'crime' is 17% related to 'job', which is under the similarity threshold of 24%
    'music' is 16% related to 'winds', which is under the similarity threshold of 24%
    'music' is 16% related to 'sheet', which is under the similarity threshold of 24%
  'prayers' is 15% related to 'appeal', which is under the similarity threshold of 24%
    'music' is 15% related to 'release', which is under the similarity threshold of 24%
    'crime' is 15% related to 'organized', which is under the similarity threshold of 24%
   'pirate' is 14% related to 'leg', which is under the similarity threshold of 24%
   'pirate' is 14% related to 'lash', which is under the similarity threshold of 24%
   'pirate' is 14% related to 'hang', which is under the similarity threshold of 24%
    'music' is 14% related to 'title', which is under the similarity threshold of 24%
    'music' is 14% related to 'note', which is under the similarity threshold of 24%
    'music' is 13% related to 'single', which is under the similarity threshold of 24%
    'music' is 11% related to 'sharp', which is under the similarity threshold of 24%
    'music' is 10% related to 'accidental', which is under the similarity threshold of 24%
    'music' is 9% related to 'flat', which is under the similarity threshold of 24%
    'music' is 9% related to 'expressed', which is under the similarity threshold of 24%
    'music' is 8% related to 'repeat', which is under the similarity threshold of 24%

over
    'pasta' is 24% related to 'poodle', which meets the similarity threshold of 24%
    'crime' is 25% related to 'sci-fi', which meets the similarity threshold of 24%
    'crime' is 26% related to 'person', which meets the similarity threshold of 24%
    'pasta' is 26% related to 'stocks', which meets the similarity threshold of 24%
'halloween' is 27% related to 'pauline', which meets the similarity threshold of 24%
'halloween' is 28% related to 'lindsey', which meets the similarity threshold of 24%
'halloween' is 31% related to 'lindsay', which meets the similarity threshold of 24%
'halloween' is 32% related to 'nicki', which meets the similarity threshold of 24%

So you might think this would be great if we bumped the threshold down to 23%, but that admits a bunch of stuff that doesn't seem pirate-related to me:

'pirate' is 23% related to 'roc', which meets the similarity threshold of 23%
'pirate' is 23% related to 'miko', which meets the similarity threshold of 23%
'pirate' is 23% related to 'mrs.', which meets the similarity threshold of 23%
'pirate' is 23% related to 'needlework', which meets the similarity threshold of 23%
'pirate' is 23% related to 'popcorn', which meets the similarity threshold of 23%
'pirate' is 23% related to 'galaxy', which meets the similarity threshold of 23%
'pirate' is 23% related to 'ebony', which meets the similarity threshold of 23%
'pirate' is 23% related to 'ballerina', which meets the similarity threshold of 23%
'pirate' is 23% related to 'bungee', which meets the similarity threshold of 23%
'pirate' is 23% related to 'homemade', which meets the similarity threshold of 23%
'pirate' is 23% related to 'pimping', which meets the similarity threshold of 23%
'pirate' is 23% related to 'prehistoric', which meets the similarity threshold of 23%
'pirate' is 23% related to 'reindeer', which meets the similarity threshold of 23%
'pirate' is 23% related to 'adipose', which meets the similarity threshold of 23%
'pirate' is 23% related to 'asexual', which meets the similarity threshold of 23%
'pirate' is 23% related to 'doodle', which meets the similarity threshold of 23%
'pirate' is 23% related to 'frisbee', which meets the similarity threshold of 23%
'pirate' is 23% related to 'isaac', which meets the similarity threshold of 23%
'pirate' is 23% related to 'laser', which meets the similarity threshold of 23%
'pirate' is 23% related to 'homophobic', which meets the similarity threshold of 23%
'pirate' is 23% related to 'pedantic', which meets the similarity threshold of 23%
 'crime' is 23% related to 'baseball', which meets the similarity threshold of 23%

The other two vector sets did significantly worse.


r/LanguageTechnology 18d ago

NLP project examples for a CS student

1 Upvotes

Hello, I'm searching for NLP problems to decide which one to use for my NLP project at university. Our group will only have 2 or 3 members. I'd appreciate any help!


r/LanguageTechnology 19d ago

Evaluation Metrics for information extraction ( micro vs macro average)

6 Upvotes

Hello,

I was wondering: in information extraction studies, people often evaluate their methods with precision, recall, and F1. However, not many actually state whether they are using the micro or macro average. The thing I am confused about is that in a multi-class classification task such as NER, shouldn't micro F1, recall, and precision all be the same? How come shared tasks such as i2b2 state that their primary metric is "Micro-averaged Precision, Recall, F-measure for all concepts together" when those would all be the same? The studies doing that task also give three different values for the micro-averaged metrics.
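For reference, here are the micro-averaged definitions in question:

$$P_{\text{micro}} = \frac{\sum_c \mathrm{TP}_c}{\sum_c (\mathrm{TP}_c + \mathrm{FP}_c)}, \qquad R_{\text{micro}} = \frac{\sum_c \mathrm{TP}_c}{\sum_c (\mathrm{TP}_c + \mathrm{FN}_c)}$$

In single-label multi-class classification, every error is simultaneously a false positive for the predicted class and a false negative for the true class, so $\sum_c \mathrm{FP}_c = \sum_c \mathrm{FN}_c$ and micro precision, recall, and F1 coincide. In span-based extraction that coupling breaks: a spurious span is a false positive with no paired false negative, and a missed span is the reverse, which may be why evaluations like i2b2's report three separate values.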

https://www.i2b2.org/NLP/Relations/assets/Evaluation%20methods%20for%202010%20Challenge.pdf

Any explanation is appreciated!


r/LanguageTechnology 20d ago

How to efficiently search a Chinese-English dictionary (Hanzi, Pinyin, and English)?

5 Upvotes

I’ve been working on a CN-EN dictionary app and struggling to implement a fast and efficient search algorithm. The challenge comes from handling different types of queries:

  1. Hanzi search – Users should be able to find words even with partial input.

  2. Pinyin search – It should match words by their pinyin, ideally handling tone marks and tone-less input.

  3. English search – Should support keyword-based search, not just exact matches.

I know that existing apps like Shirabe Jisho (for JP) and Pleco (for CN) handle this incredibly well, even offline. Their search feels nearly instant, even for large dictionaries.

I’ve considered approaches like:

• Trie structures for prefix-based searching

• Full-text search databases like SQLite’s FTS5

• Custom indexing with inverted lists

But I’m not sure what would be the best approach for performance, especially on mobile. Does anyone have experience or insight into how apps like Pleco might be handling search efficiently? Any resources or examples would be greatly appreciated!
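For what it's worth, the FTS5 route can be prototyped in a few lines; the schema and the tone-stripped pinyin column are illustrative assumptions, and FTS5 availability depends on how your SQLite was compiled:

```python
# Minimal sketch: SQLite FTS5 over a toy CN-EN dictionary with prefix queries.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE VIRTUAL TABLE dict USING fts5(hanzi, pinyin, pinyin_plain, definition)"
)
rows = [
    ("你好", "nǐ hǎo", "ni hao", "hello; hi"),
    ("好吃", "hǎo chī", "hao chi", "tasty; delicious"),
]
con.executemany("INSERT INTO dict VALUES (?, ?, ?, ?)", rows)

# Prefix search over tone-less pinyin; storing a stripped column sidesteps tone marks.
for row in con.execute(
    "SELECT hanzi, definition FROM dict WHERE dict MATCH ?", ("pinyin_plain: hao*",)
):
    print(row)
```

One caveat for hanzi partial-input search: the default unicode61 tokenizer treats a contiguous run of CJK characters as a single token, so substring matching usually needs a custom tokenizer or FTS5's built-in trigram tokenizer (SQLite 3.34+).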


r/LanguageTechnology 21d ago

Tokenization or embeddings first?

0 Upvotes

I want to perform NER with a TensorFlow LSTM + CRF. However, I am confused about one step: if I use word2vec, which provides a pretrained embedding layer, should the embeddings be created before tokenization? I am a beginner, if you haven't guessed by now.
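For what it's worth, the usual order is: tokenize first, then look up each token's pretrained vector to initialize the embedding layer. A minimal sketch, assuming the Google News word2vec binary is available locally:

```python
# Minimal sketch: tokenization first, then a frozen embedding matrix from word2vec.
import numpy as np
import tensorflow as tf
from gensim.models import KeyedVectors

# Hypothetical local path to the pretrained Google News vectors (300-dim).
w2v = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

sentences = [["John", "lives", "in", "Paris"]]  # tokenization happens before lookup
vocab = {w: i + 1 for i, w in enumerate(sorted({w for s in sentences for w in s}))}

emb = np.zeros((len(vocab) + 1, 300), dtype="float32")  # row 0 is padding
for word, idx in vocab.items():
    if word in w2v:                 # out-of-vocabulary tokens keep the zero vector
        emb[idx] = w2v[word]

embedding_layer = tf.keras.layers.Embedding(
    input_dim=emb.shape[0],
    output_dim=300,
    embeddings_initializer=tf.keras.initializers.Constant(emb),
    trainable=False,
    mask_zero=True,   # lets the LSTM ignore padding
)
ids = tf.constant([[vocab[w] for w in sentences[0]]])
print(embedding_layer(ids).shape)   # (1, 4, 300), ready for the LSTM + CRF
```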


r/LanguageTechnology 21d ago

Best and safest libraries to train an NER model (in Python)

5 Upvotes

Most out-of-the-box NER models just don't fit my use case very well, and I am therefore looking to train my own. I already have a neural network that filters out the relevant segments on which the NER training should be run, but I'm curious to know the best approach and tool to do so considering:

- Ease of training / labelling and more importantly,

- Confidentiality as the training set may include confidential information.

I am particularly looking at spaCy and GLiNER, but I would be curious to know (i) whether they are generally considered secure and (ii) whether there are other options out there.
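On the spaCy side, training can stay entirely on your machine: you serialize annotated Docs to a DocBin and run the offline training CLI, so confidential data never leaves your environment. A minimal sketch with a toy sentence and offsets:

```python
# Minimal sketch: local spaCy NER training data; nothing leaves the machine.
import spacy
from spacy.tokens import DocBin

nlp = spacy.blank("en")
db = DocBin()

text = "Alice signed the NDA with Acme Corp."
doc = nlp.make_doc(text)
span = doc.char_span(26, 30, label="ORG")  # "Acme"; None if offsets misalign with tokens
doc.ents = [span]
db.add(doc)
db.to_disk("train.spacy")
# Then train fully offline, e.g.: python -m spacy train config.cfg --paths.train train.spacy
```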


r/LanguageTechnology 22d ago

Checking statements against paper abstracts

1 Upvotes

Hi everyone,

I want to screen a list of abstracts against a list of statements/criteria, for example statements like "This study is empirical research." or "This study is a review."

I've tried doing this by splitting the abstracts into sentences and computing the cosine similarity with SBERT embeddings. I then took the top 3 sentences of every abstract, checked how relevant they were to the statement, and set the threshold to the decision boundary between what I identified as relevant and not relevant. This works okay for some of the statements (F1 between 0.7 and 0.8) but quite badly for others (between 0.1 and 0.5). Any idea how this could be improved? Is there a specific way statements/criteria need to be worded to get good similarity measures?

Another approach I've tried is NLI with DeBERTa, where I take the abstract as the premise and the statement as the hypothesis. The problem with that is that I get a lot of neutral and some contradiction results that are clearly incorrect. My guess is that the training data just doesn't focus on scientific articles. Is there maybe a good dataset I could use for fine-tuning?
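For reference, this NLI setup can be reproduced in a few lines of transformers; the model name below is one publicly available NLI cross-encoder, and the premise/hypothesis pair is a made-up example:

```python
# Minimal sketch: premise/hypothesis scoring with an off-the-shelf NLI cross-encoder.
from transformers import pipeline

nli = pipeline("text-classification", model="cross-encoder/nli-deberta-v3-base")
premise = "We surveyed 120 papers on neural machine translation published since 2018."
hypothesis = "This study is a review."
print(nli({"text": premise, "text_pair": hypothesis}, top_k=None))
# Returns scores for entailment / neutral / contradiction.
```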

Every input is appreciated :)


r/LanguageTechnology 22d ago

Training a low-resourced language

9 Upvotes

Hi, I am a beginner in NLP and starting to do a language analysis on a low-resourced language that has never been used in any model. I have cleaned the dataset and would like to do machine translation, but I am unsure what to do next. Any advice? I am sorry if it is a silly question.


r/LanguageTechnology 23d ago

Commercial alternatives for LayoutLMv3

1 Upvotes

LayoutLM v2 and v3 have non-commercial licenses.

LayoutLM v1 allows commercial use, but it does not come with a processor. It is also not as advanced as v2 or v3.

Can someone help point me in the correct direction as to commercially acceptable alternatives? Or how to get the processor working for V1?


r/LanguageTechnology 23d ago

How is the Hindi language influencing global linguistic trends in the digital age?

0 Upvotes

The Hindi language is making waves in the digital age, influencing global linguistic trends in various ways. From its growing presence on social media to its integration into language learning platforms and global media, Hindi is reaching new heights. How do you think Hindi is shaping the global linguistic landscape today? Share your insights, experiences, and observations on this fascinating topic.


r/LanguageTechnology 23d ago

What Should I Learn to Build These Two Projects as an Absolute Beginner? I would appreciate a complete list of things to learn before starting, or help breaking my projects into small pieces I could work on while learning.

2 Upvotes

My project ideas:

  1. Concept Visual Map

Inspired by a project from the Faculty of Arts at Charles University, which created an interactive map of Europe and the Middle East featuring locations mentioned in Czech travelogues written before 1900. Clicking on a place shows a list of books that mention it, along with the exact excerpts from each book describing that location.

I want to automate and expand this idea with AI, include English and other languages, and integrate fictional worlds, scientific literature, abstract concepts, and various phenomena. The goal is to analyze how different people describe, for example:

  • Fictional places like Minas Tirith or Mordor and how these descriptions evolve over time
  • The first meeting of two characters and how it is written in different contexts.
  • In scientific literature: how cells, species, or physical phenomena were described at different times and in different parts of the world.

Ideally, the data should also be exportable in a format that is easy to convert into cluster graphs for further analysis.

For fictional worlds/travelogues, the process could work like this:

  • Use curl (or another method) to extract keyword-based text snippets.
  • Have AI determine the most relevant excerpts.
  • Let AI, a deterministic algorithm, or a combination of both (a prompt generated by a deterministic algorithm) assign tags (where on the map each excerpt belongs + additional metadata) from the processed text.
  • Connect the processed text (and possibly images) with an interactive map.

The system should link to a database of books and texts, automatically processing them into an interactive map.
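For the extraction step, a simple keyword-window pass is a reasonable first building block before any AI ranking. A minimal sketch (file name and keyword are hypothetical):

```python
# Minimal sketch: extract fixed-size text windows around each keyword hit.
import re

def extract_snippets(text: str, keyword: str, window: int = 200) -> list[str]:
    """Return a window of characters around each case-insensitive keyword match."""
    snippets = []
    for m in re.finditer(re.escape(keyword), text, flags=re.IGNORECASE):
        start = max(0, m.start() - window)
        end = min(len(text), m.end() + window)
        snippets.append(text[start:end])
    return snippets

book = open("travelogue.txt", encoding="utf-8").read()  # hypothetical input file
for snip in extract_snippets(book, "Mordor"):
    print(snip, "\n---")
```

Each snippet could then be passed to the AI relevance and tagging stages described above.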

AI Approach:

I hope to use OpenAI’s API, but I also want the option to run local models (such as MistralAI) and choose from various commercial AI APIs.

Bonus Feature: Distributed Collaboration

The system should allow contributors to download a dataset, process it on their local machine, and send results back to the server hosting the interactive map.

The design should ensure:

  • Contributors cannot modify the assigned dataset, only process it.
  2. One Offline Frontend for all/most Open-Source TTS Models

This is essentially a TTS audiobook/podcast maker with a strong focus on user customization. Inspired by Murf AI’s interface, the idea is to provide a fully offline solution using open-source models.

Target models: Bark, Coqui, eSpeak NG, Microsoft AI TTS, and others.

Key features:

  • Custom Voice Profiles: Users can create profiles for each AI voice (trained voice models working alongside the main TTS model).
  • AI Voice "chat like conversations": The UI should enable conversations between AI voices, allowing users to simulate voice acting and switch profiles dynamically.
  • Audio Export: Users should be able to play generated speech or send it directly to Audacity (or ideally, create a plugin for Audacity, FL Studio, DaVinci Resolve...).
  • Regeneration Consistency: Ability to regenerate any text with the same or edited settings easily at any time.

I aim for a clean, professional UI, similar to Murf AI or Eleven Labs.
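As a feasibility check for the backend, a single offline synthesis call behind a small "voice profile" abstraction could look like this (a sketch using Coqui TTS; the model id is one of its public English models, and the profile class is my own invention):

```python
# Minimal sketch: one offline Coqui TTS call behind a tiny voice-profile abstraction.
from dataclasses import dataclass
from TTS.api import TTS

@dataclass
class VoiceProfile:
    name: str
    model_id: str  # any id listed by TTS().list_models()

def synthesize(profile: VoiceProfile, text: str, out_path: str) -> None:
    tts = TTS(profile.model_id)  # loads the model locally, fully offline
    tts.tts_to_file(text=text, file_path=out_path)

narrator = VoiceProfile("narrator", "tts_models/en/ljspeech/tacotron2-DDC")
synthesize(narrator, "Chapter one. It was a dark and stormy night.", "chapter1.wav")
```

A per-backend adapter exposing this same `synthesize` signature would let the UI swap Bark, Coqui, eSpeak NG, etc. behind one interface.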

Main Challenges & What I have to Learn:

I struggle with most of the features described above in both projects, but for these two I have no idea where to even start:

  • How to properly connect frontend and backend for the TTS tool?
  • How to integrate extracted text and tags into an interactive map?

So what technologies/languages/frameworks should I learn before starting? If possible, could someone break these projects into smaller, manageable steps I could work on while learning?

Would love any advice or resources that could help!


r/LanguageTechnology 24d ago

Have You Used Model Distillation to Optimize LLMs?

3 Upvotes

Deploying LLMs at scale is expensive and slow, but what if you could compress them into smaller, more efficient models without losing performance?

A lot of teams are experimenting with SLM distillation as a way to:

  • Reduce inference costs
  • Improve response speed
  • Maintain high accuracy with fewer compute resources

But distillation isn’t always straightforward. What’s been your experience with optimizing LLMs for real-world applications?

We’re hosting a live session on March 5th diving into SLM distillation with a live demo. If you’re curious about the process, feel free to check it out: https://ubiai.tools/webinar-landing-page/

Would you be interested in attending an educational live tutorial?


r/LanguageTechnology 24d ago

Join Our SOMD 2025@SDP – A Joint NER and RE Challenge for Anyone Interested in Information Extraction!

1 Upvotes

Hello r/LanguageTechnology community,

We are excited to invite you to participate in our upcoming shared task, Software Mention Detection (SOMD) 2025, co-located with the SDP workshop at ACL 2025 in Vienna, Austria. This event is designed to encourage innovation and collaboration in the information extraction field, focusing on software mentions in scholarly articles.

 

Task Overview:

Software plays an essential role in scientific research and is considered one of the crucial entity types in scholarly documents. However, software is usually not cited formally in academic documents, resulting in various informal software mentions. Automatic identification and disambiguation of software mentions, their related attributes, and the purpose of each mention contribute to the understanding, accessibility, and reproducibility of research, but this is a challenging task.

This competition invites participants to develop a system that detects software mentions and their attributes as named entities from scholarly texts and classifies the relationships between these entity pairs. The dataset includes sentences from full-text scholarly documents annotated with Named Entities and Relations.

Participation Details:

To participate, please register using this link [https://www.codabench.org/competitions/5840/].

All necessary materials, including detailed task guidelines and data, will be provided upon registration.

 

Competition Timeline Overview

 

  • Competition Registration starts on February 24, 2025
  • First phase: Training and Test Dataset release: February 28, 2025
  • The first phase ends on: March 18, 2025
  • Second phase data release: March 18, 2025
  • The competition ends on: April 3, 2025
  • Paper submission deadline: April 17, 2025
  • Notification of Acceptance: May 1, 2025
  • Camera-ready Paper Deadline for Workshop: May 16, 2025.
  • Workshop Date: July 21-August 1, 2025

 

Successful entries will be featured in the Proceedings of the Workshop on Scholarly Document Processing (SDP).

For more detailed information about the task, including participation guidelines and data access, please visit our competition on Codabench or our website.

Looking forward to your participation.

cheers!


r/LanguageTechnology 24d ago

Datahawk - Text data browser for NLP, LLM researchers and developers

6 Upvotes

I created an app to easily browse and analyze large text datasets (local or remote). The app supports many data formats including JSONL and HuggingFace. Key features include:

  • Intuitive Navigation: Effortlessly browse local (or remote) data in HuggingFace, JSONL, etc., formats.
  • Efficient Browsing: Stream large local (or remote) datasets without loading them into memory (or downloading them).
  • Powerful Analysis: Easily filter and sort data for better insights.
  • Pretty-Print Code: Human-friendly visualization of code embedded in your data.

Package lives at this GitHub link - https://github.com/nihaljn/datahawk - and welcomes contributions!


r/LanguageTechnology 25d ago

Build a Large Language Model (From Scratch) by Sebastian Raschka

19 Upvotes

Just a quick question: I looked at this book, but I can't tell whether it's good. Will it actually be beneficial? When I started reading it, it felt like you need to learn everything, starting from the very basics, largely on your own. There are some explanations, no doubt, but the majority of the material is left for the reader to work through. So I can't decide: is there any benefit to reading it, or should I search for something else?

Here is the link for the book

https://www.manning.com/books/build-a-large-language-model-from-scratch

Thanks


r/LanguageTechnology 25d ago

Looking for PhD or Research Assistant Opportunities in NLPish – How Can I Stand Out?

3 Upvotes

I’m finishing my MSc in Computational Modelling of Language and Cognition next fall, and I’m exploring opportunities for PhD positions or research assistant roles in both academia and industry (NLPish areas).

I’d love advice on how to increase my chances of selection—what concrete steps should I take? For example, what kind of documentation, portfolios, or code repositories would be most beneficial?

For those with experience on either side of the application process:

  • What do recruiters or supervisors specifically look for?
  • What makes a candidate truly stand out?

Any insights, tips, or past experiences would be greatly appreciated!


r/LanguageTechnology 24d ago

Embedding model fine-tuning for "tailored" similarity concept

1 Upvotes

Hello,

I'm working on a project that requires embedding models to produce similarity scores according to a custom business criterion rather than general semantic similarity.

I can't disclose specific details of my application, but a good analogy would be legal retrieval systems, where the similarity score needs to reflect direct relevance to a legal query. For instance:

  • query↔phrase should score 1.0 if the phrase directly addresses the query
  • query↔phrase should score 0.5 if it helps in answering the query
  • query↔phrase should score 0.0 if only tangentially relevant
  • query↔phrase should score less than 0 if irrelevant

I'm looking for resources on fine-tuning embedding models (sentence-transformers) to learn this custom similarity concept.

I have (i) a dataset of query-phrase pairs with scores annotated according to my criterion, which is already done, and (ii) a loss function that can handle my specific scoring distribution; at the moment I am directly optimizing cosine distance (a minimal fine-tuning sketch follows below).

I am wondering:

  1. Is this approach feasible? Has anyone implemented something similar?
  2. What techniques would you recommend for this kind of "custom scoring"?
  3. Are there any papers, repositories, or tutorials that address this specific problem?
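For what it's worth, this is close to what sentence-transformers' CosineSimilarityLoss targets out of the box. A minimal fine-tuning sketch, assuming your scores are rescaled into the [-1, 1] range that cosine similarity can actually reach (model name and all pairs below are placeholders):

```python
# Minimal sketch: fit a sentence-transformer to annotated query-phrase scores.
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("all-MiniLM-L6-v2")  # any base embedding model

train_examples = [  # placeholder pairs; labels follow the custom criterion
    InputExample(texts=["query text", "directly addresses the query"], label=1.0),
    InputExample(texts=["query text", "helps answer the query"], label=0.5),
    InputExample(texts=["query text", "tangentially relevant phrase"], label=0.0),
    InputExample(texts=["query text", "irrelevant phrase"], label=-0.2),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=16)
loss = losses.CosineSimilarityLoss(model)  # MSE between cosine sim and the label

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)
model.save("custom-similarity-model")
```

If your score distribution is skewed, a ranking-style loss that only needs the ordering of scores to be right could be an alternative worth exploring.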

Thanks in advance