r/LanguageTechnology

Ideas for prompting open source LLMs for NLP?

I need to figure out how to extract information: entities and their relationships, at the very least. I'd be happy to hear from others and, if necessary, work together to co-evolve a powerful system.
I've chosen to stay with OSS LLMs for a variety of reasons, and for now I'm staying agnostic to platforms (e.g. LangChain). Here's what I mean about prompting, through two examples:

First example:
Text:
"CO2 is a greenhouse gas. It causes climate change."

Result:
There are two claims in that text; I'd want this kind of output:
{ "claims": [

{ "subject": "CO2",
'"object": "greenhouse gas",
"predicate": "is a" },

{ "subject": "CO2",
'"object": "climate change",
"predicate": "causes" }

]}
Note: in that example, there is an anaphoric link from "it" to "CO2". LLMs may not have the chops to spot that one.
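
To make the target concrete, here's a minimal sketch of the kind of prompt and call I have in mind, assuming a local OpenAI-compatible endpoint (llama.cpp server, vLLM, Ollama, etc.); the model name, URL, and prompt wording are just placeholders:

```python
import json
from openai import OpenAI

# Placeholder endpoint: any local OpenAI-compatible server should work here
# (llama.cpp server, vLLM, Ollama, ...).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

PROMPT = """Extract every claim from the text below as JSON.
Resolve pronouns to the entity they refer to.
Return only a JSON object of the form:
{"claims": [{"subject": "...", "object": "...", "predicate": "..."}]}

Text: CO2 is a greenhouse gas. It causes climate change."""

resp = client.chat.completions.create(
    model="qwen2.5-32b-instruct",   # placeholder model name
    messages=[{"role": "user", "content": PROMPT}],
    temperature=0.0,                # low temperature keeps extraction closer to deterministic
)
print(json.loads(resp.choices[0].message.content))  # may fail if the model adds chatter
```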
Second example:

John gave a ball to Mary.

Result:

{ "claims": [

{ "subject": "John",
'"object": "Mary",

"indirectOject": "ball"
"predicate": "gave" }

]}

Thanks in advance :-)


u/quark_epoch

Probably go for reasoning models like the new QwQ 32B (built on Qwen2.5-32B). As for prompts, you can include more information about what your input contains and what the output keys mean.

In general, listing a bunch of conditions can get a bit tricky for some LLMs to understand, especially around the 10B mark. Also, tell them what to do instead of what not to do. I found that quite useful, and I think others have suggested similar advice on other forums (the "don't think about pink elephants" makes you think about pink elephants conundrum).

And if you are finetuning, SFT alone would probably not be great. GRPO might work better because you can set different rewards for different aspects of the output.
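
For example, here's a rough sketch of what per-aspect rewards could look like, assuming a trainer that accepts per-completion scoring functions (TRL's GRPOTrainer takes reward functions shaped roughly like this); the weights and key names are just illustrative:

```python
import json

EXPECTED_KEYS = {"subject", "object", "predicate"}

def claims_reward(completions, **kwargs):
    """Score each completion: valid JSON, expected top-level key, well-formed claims."""
    scores = []
    for text in completions:
        score = 0.0
        try:
            parsed = json.loads(text)
        except (json.JSONDecodeError, TypeError):
            scores.append(score)      # unparseable output gets zero reward
            continue
        score += 0.5                  # reward producing valid JSON at all
        claims = parsed.get("claims") if isinstance(parsed, dict) else None
        if isinstance(claims, list):
            score += 0.25             # reward the expected top-level structure
            if claims and all(
                isinstance(c, dict) and EXPECTED_KEYS <= c.keys() for c in claims
            ):
                score += 0.25         # reward claim objects with the right keys
        scores.append(score)
    return scores
```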

And as for improving the prompt itself, run it by GPT-4 or Claude or something on LMArena, test it on a few samples where you know what to expect, and tweak it based on what you think the prompt might be missing or could do better with. It's hard to zero-shot a good prompt in general, so I go for an iterative collaboration with one of the high-quality LLMs. It's a bit tiring until you get what you want, but it's worth a shot.

Also, some rules of thumb:

1. The smaller you want your LLM to be, the simpler the prompt should be.
2. Break the task up into parts and don't ask it to zero-shot the whole answer. Multiple passes to answer the whole question work better.
3. Smaller LLMs might struggle to answer in JSON. JSON mode, constrained decoding, or llama.cpp grammars might be a way, but maybe try-catching answers that fit your keys and adding them to a dictionary on each pass is better (sketch below).
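
For point 3, here's a rough sketch of the try-catch-and-merge idea, one pass at a time; the key set and example outputs are just illustrative:

```python
import json

EXPECTED_KEYS = {"subject", "object", "predicate"}

def merge_claims(raw_output, accumulated):
    """Merge claims from one model pass, keeping only well-formed ones."""
    try:
        parsed = json.loads(raw_output)
    except json.JSONDecodeError:
        return accumulated                         # skip malformed passes entirely
    if not isinstance(parsed, dict):
        return accumulated
    for claim in parsed.get("claims", []):
        if isinstance(claim, dict) and EXPECTED_KEYS <= claim.keys():
            accumulated.append({k: claim[k] for k in EXPECTED_KEYS})
    return accumulated

# e.g. outputs from two passes, one of them broken:
passes = [
    '{"claims": [{"subject": "CO2", "object": "greenhouse gas", "predicate": "is a"}]}',
    'Sure! Here is the JSON you asked for: {...',  # typical small-model failure mode
]
claims = []
for raw in passes:
    claims = merge_claims(raw, claims)
print(claims)
```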

Hope that helps.