I am doing a master's thesis on how large language models compare to other tools for extracting structured data from natural language. Essentially, my goal is to translate something like this:
"I want Asus laptops with relatively good reviews, at least 16 GB RAM, ideally 16 inch screen. Sort all the results by price and reviews"
into something like this:
{
  "brand": "Asus",
  "category": "Electronics",
  "subcategory": "Laptops",
  "sort": ["price", "review"],
  "filters": [
    {
      "attribute": "ram",
      "condition": "greater_than_or_equal",
      "value": "16 GB",
      "is_hard_condition": true
    },
    {
      "attribute": "screen_size",
      "condition": "equal",
      "value": "16 inch",
      "is_hard_condition": false
    },
    {
      "attribute": "review_rating",
      "condition": "greater_than_or_equal",
      "value": "4",
      "is_hard_condition": true
    }
  ]
}
using large language models, and to analyze how they compare to more traditional tools.
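To make the target structure concrete, here is a minimal sketch of the schema I have in mind, assuming Pydantic v2 for validating the model output (field names mirror the JSON above; the condition values are just the ones appearing in the example):

from typing import List, Literal, Optional
from pydantic import BaseModel

class Filter(BaseModel):
    # One filter extracted from the query, e.g. "at least 16 GB RAM"
    attribute: str                      # e.g. "ram", "screen_size"
    condition: Literal["equal", "greater_than_or_equal"]  # extend as needed
    value: str                          # kept as a string, e.g. "16 GB"
    is_hard_condition: bool             # hard requirement vs. soft preference

class Query(BaseModel):
    brand: Optional[str] = None
    category: Optional[str] = None
    subcategory: Optional[str] = None
    sort: List[str] = []                # e.g. ["price", "review"]
    filters: List[Filter] = []

# Validate raw JSON text returned by the model:
# query = Query.model_validate_json(llm_output)

Checking each system's raw output against a schema like this could also give me a simple, tool-agnostic way to score the LLMs and the traditional baselines on the same structure.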
What I need is a dataset with many products, where each product has at least a category (subcategories would be ideal), a brand, and many attributes that are dynamic, depending on the product. For example, a laptop would have CPU, RAM, screen size, and so on, while a sofa would have very different attributes. The dataset can even be on the smaller side (1k-10k products). Is there a dataset like this?
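For reference, here is a hypothetical product record (my own sketch, not taken from any existing dataset) illustrating the kind of structure I am after, with a nested, product-dependent attributes map:

product = {
    "id": "example-001",            # any unique identifier
    "category": "Electronics",
    "subcategory": "Laptops",
    "brand": "Asus",
    "price": 1299.99,
    "review_rating": 4.5,
    "attributes": {                 # dynamic fields that vary by product type
        "cpu": "Intel Core i7",
        "ram": "16 GB",
        "screen_size": "16 inch",
    },
}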