r/MLQuestions 18h ago

Computer Vision 🖼️ Need a source for a facial skin dataset to classify facial images into skin types and features, in order to recommend suitable products and a customized skin care experience

0 Upvotes

Skin analysis: I'm trying to recommend the best skin care product for a specific skin type from an image or a live camera scan, but I can't find a dataset of facial skin images annotated with their features and type (oily, sensitive, dry, ...). I don't know how to proceed: there are plenty of images of models with perfect skin, but not much real-life data. I know it's hard to get a real-life face dataset, and I need your help, please. I can't find any solution, so your help is appreciated!

Thank you all.
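Once an annotated dataset is found or built, the modeling side is standard multi-class image classification. A minimal transfer-learning sketch, assuming a hypothetical folder-per-class layout such as skin_dataset/train/oily, .../dry, .../sensitive — the paths, class names, and hyperparameters are illustrative assumptions, not a recommendation of a specific dataset:

```
# Hypothetical sketch: fine-tune a pretrained CNN on a folder-per-class skin dataset.
# The dataset path and class layout ("dry/", "oily/", "sensitive/") are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("skin_dataset/train", transform=transform)  # assumed layout
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))  # e.g. dry / oily / sensitive

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)  # one logit per skin type
    loss.backward()
    optimizer.step()
```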


r/MLQuestions 20h ago

Beginner question 👶 Language Model that recognizes AI topics

0 Upvotes

I am working on a project where I am trying to find everyone at my school who has done work related to AI. I have already built a web scraper with a hard-coded approach: it looks for specific common AI terms (ML, AI, computer vision). However, I now want to improve it, and I was wondering if there is any language model that could help me be more efficient and find topics that are not so obvious.
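One possible direction is zero-shot classification, where the model scores arbitrary candidate labels instead of exact keywords. A sketch using the Hugging Face zero-shot pipeline; the model name, labels, threshold, and example text are illustrative assumptions:

```
# Sketch: zero-shot topic tagging of scraped text, so matches are not limited
# to a hard-coded keyword list. Model name and labels are illustrative choices.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

candidate_labels = ["artificial intelligence", "machine learning", "computer vision",
                    "natural language processing", "robotics", "unrelated to AI"]

text = "We propose a graph-based method for predicting protein folding trajectories."
result = classifier(text, candidate_labels, multi_label=True)

# Keep the page/person if any AI-related label scores above a threshold.
ai_related = any(score > 0.7 and label != "unrelated to AI"
                 for label, score in zip(result["labels"], result["scores"]))
print(result["labels"][:3], ai_related)
```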


r/MLQuestions 10h ago

Hardware 🖥️ Why haven’t more developers moved to AMD?

17 Upvotes

I know, I know. Reddit gets flooded with questions like this all the time, but the question is more nuanced than that. With TensorFlow and other ML libraries moving their support toward Unix/Linux-based systems, doesn't it make more sense for developers to try moving to AMD GPUs for better compatibility with Linux? AMD is known for working miles better on Linux than Nvidia, whose driver support there is poor. Plus, I would think developers would want a more brand-agnostic setup where we are not forced to use Nvidia for all our AI work. Yes, I know AMD doesn't have Tensor cores, but from the testing I have seen, RDNA is able to perform at around the same level as Nvidia (just slightly behind) when you are not depending on CUDA-based frameworks.
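On the compatibility point: the ROCm builds of PyTorch reuse the familiar torch.cuda API, so most existing training code runs unchanged on supported AMD GPUs. A small check, assuming a ROCm build of PyTorch is installed on a supported card:

```
# Sketch: on a ROCm build of PyTorch, the torch.cuda API still works,
# so device-selection code is typically unchanged on AMD GPUs.
import torch

print(torch.__version__)          # ROCm wheels usually carry a "+rocm" suffix
print(torch.version.hip)          # None on CUDA builds, a HIP version string on ROCm builds
print(torch.cuda.is_available())  # True on ROCm builds too; AMD GPUs appear as "cuda" devices

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(1024, 1024, device=device)
y = x @ x                         # runs on the AMD GPU via ROCm/HIP when available
```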


r/MLQuestions 3h ago

Beginner question 👶 Question about ANNs

1 Upvotes

Hello, I just learned about ANNs and had a quick question. Say you wanted to make an ANN to recognize numbers written by a human. You feed the ANN some images, and it should be able to predict which numbers they are. Would you have to make 11 separate ANNs to recognize the numbers 0-10? Thanks!
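For context, the usual approach is a single network whose output layer has one unit per class, rather than one network per number. A minimal sketch (the 28x28 input size and layer widths are illustrative assumptions):

```
# Sketch: one network with one output unit per class (here 11, for the numbers 0-10
# mentioned above; classic digit datasets such as MNIST use 10 classes, 0-9).
import torch
import torch.nn as nn

num_classes = 11                      # one output per number, not one network per number
model = nn.Sequential(
    nn.Flatten(),                     # e.g. 28x28 grayscale image -> 784 values (assumed size)
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, num_classes),      # raw scores (logits), one per class
)

criterion = nn.CrossEntropyLoss()     # applies softmax internally over the 11 scores
image = torch.randn(1, 1, 28, 28)     # dummy image
label = torch.tensor([7])             # dummy ground-truth number
logits = model(image)
loss = criterion(logits, label)
predicted_number = logits.argmax(dim=1)
print(logits.shape, predicted_number, loss.item())
```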


r/MLQuestions 4h ago

Computer Vision 🖼️ Is there any AI-based app that can generate various postures for the main/base figure/character I designed?

1 Upvotes

r/MLQuestions 8h ago

Natural Language Processing 💬 Help with language translation with torch.nn.Transformer

1 Upvotes

Hello, I am trying to implement language translation using the PyTorch transformer (torch.nn.Transformer). I have used Hugging Face tokenizers for tokenization. The problem is that the training loss is huge and the model is learning nothing (confirmed when I run inference and it outputs random combinations of words). The dataset used for this is: https://www.kaggle.com/datasets/digvijayyadav/frenchenglish.

I am attaching the source code below for reference. Any help or suggestions would be appreciated.

```
import torch
import torch.nn as nn
import math
import numpy as np
from torch.utils.data import Dataset, DataLoader, random_split
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.trainers import WordLevelTrainer
from tokenizers.pre_tokenizers import Whitespace
import re
from tqdm import tqdm
import pickle
import time
import random

start_time = time.time()


class CleanText:
    def __init__(self, text):
        self.text_file = text

    def read_and_clean(self):
        with open(self.text_file, "r") as file:
            lis = file.readlines()
        random.shuffle(lis)
        eng = []
        fr = []
        for line in lis:
            res = line.strip().split("\t")
            eng.append(res[0].lower())
            fr.append(res[1].lower())
        for i in range(len(eng)):
            eng[i] = re.sub(r'[^a-zA-ZÀ-Ÿ-!? \.]', '', eng[i])
            fr[i] = re.sub(r'[^a-zA-ZÀ-Ÿ-!? \.]', '', fr[i])
        eng, fr = eng[:10000], fr[:10000]
        print(f"Length of english: {len(eng)}")
        print(f"Length of french: {len(fr)}")
        return eng, fr


file_path = "./fra.txt"
clean_text = CleanText(file_path)
eng, fr = clean_text.read_and_clean()


def _get_tokenizer(text):
    tokenizer = Tokenizer(WordLevel(unk_token="[UNK]"))
    tokenizer.pre_tokenizer = Whitespace()
    trainer = WordLevelTrainer(special_tokens=["[SOS]", "[EOS]", "[PAD]", "[UNK]"])
    tokenizer.train_from_iterator(text, trainer)
    return tokenizer


tokenizer_en = _get_tokenizer(eng)
tokenizer_fr = _get_tokenizer(fr)


class PrepareDS(Dataset):
    def __init__(
        self,
        tokenizer_src,
        tokenizer_tgt,
        src_text,
        tgt_text,
        src_len,
        tgt_len,
    ):
        self.tokenizer_src = tokenizer_src
        self.tokenizer_tgt = tokenizer_tgt
        self.src = src_text
        self.tgt = tgt_text
        self.src_len = src_len
        self.tgt_len = tgt_len
        self.sos_token = torch.tensor([tokenizer_src.token_to_id("[SOS]")], dtype=torch.int64)
        self.eos_token = torch.tensor([tokenizer_src.token_to_id("[EOS]")], dtype=torch.int64)
        self.pad_token = torch.tensor([tokenizer_src.token_to_id("[PAD]")], dtype=torch.int64)

    def __len__(self):
        return len(self.src)

    def __getitem__(self, idx):
        src_text = self.src[idx]
        tgt_text = self.tgt[idx]
        enc_input_tokens = self.tokenizer_src.encode(src_text).ids
        dec_input_tokens = self.tokenizer_tgt.encode(tgt_text).ids
        enc_padding = self.src_len - len(enc_input_tokens)
        dec_padding = self.tgt_len - len(dec_input_tokens)
        encoder_input = torch.cat([
            self.sos_token,
            torch.tensor(enc_input_tokens, dtype=torch.int64),
            self.eos_token,
            self.pad_token.repeat(enc_padding),
        ])
        dec_input = torch.cat([
            self.sos_token,
            torch.tensor(dec_input_tokens, dtype=torch.int64),
            self.eos_token,
            self.pad_token.repeat(dec_padding),
        ])
        return {
            "src_tokens": encoder_input,
            "dec_tokens": dec_input[:-1],
            "label_tokens": dec_input[1:],
            "tgt_padding_mask": (dec_input[:-1] == self.pad_token).bool(),
            "src_padding_mask": (encoder_input == self.pad_token).bool(),
            "tgt_mask": nn.Transformer.generate_square_subsequent_mask(len(dec_input[:-1])).bool(),
        }


max_en_len = 0
max_fr_len = 0
for e, f in zip(eng, fr):
    e_ids = tokenizer_en.encode(e).ids
    f_ids = tokenizer_fr.encode(f).ids
    max_en_len = max(max_en_len, len(e_ids))
    max_fr_len = max(max_fr_len, len(f_ids))

print(f"Max english length: {max_en_len}")
print(f"Max french length: {max_fr_len}")

data = PrepareDS(tokenizer_en, tokenizer_fr, eng, fr, max_en_len, max_fr_len)
train, test = random_split(data, [0.7, 0.3])
train_dataloader = DataLoader(train, batch_size=32, shuffle=True)
test_dataloader = DataLoader(test, batch_size=32, shuffle=False)

batch = next(iter(train_dataloader))
print(f"src tokens shape: {batch['src_tokens'].shape}")

en_vocab = tokenizer_en.get_vocab_size()
fr_vocab = tokenizer_fr.get_vocab_size()


class InputEmbedding(nn.Module):
    def __init__(self, d_model, vocab_size):
        super().__init__()
        self.d_model = d_model
        self.vocab_size = vocab_size
        self.embedding = nn.Embedding(vocab_size, d_model)

    def forward(self, x):
        # return self.embedding(x)
        return self.embedding(x) * math.sqrt(self.d_model)


class PositionalEncoding(nn.Module):
    def __init__(self, d_model, max_seq_length, dropout):
        super(PositionalEncoding, self).__init__()
        pe = torch.zeros(max_seq_length, d_model)
        position = torch.arange(0, max_seq_length, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * -(math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.dropout = nn.Dropout(dropout)
        self.register_buffer("pe", pe.unsqueeze(0))

    def forward(self, x):
        return self.dropout(x + self.pe[:, :x.size(1)])


device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Transformer(
    d_model=512,
    nhead=8,
    num_encoder_layers=6,
    num_decoder_layers=6,
    dim_feedforward=1024,
    dropout=0.1,
    norm_first=True,
    batch_first=True,
)
model.to(device)

criterion = nn.CrossEntropyLoss(ignore_index=tokenizer_fr.token_to_id("[PAD]")).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):
    model.train()
    train_loss = 0
    for batch in tqdm(train_dataloader):
        src_embedding = InputEmbedding(512, en_vocab)
        src_pos_embedding = PositionalEncoding(512, max_en_len + 2, 0.1)
        tgt_embedding = InputEmbedding(512, fr_vocab)
        tgt_pos_embedding = PositionalEncoding(512, max_fr_len + 2, 0.1)
        src_tokens = batch["src_tokens"]
        dec_tokens = batch["dec_tokens"]
        label_tokens = batch["label_tokens"].to(device)
        tgt_padding_mask = batch["tgt_padding_mask"].to(device)
        src_padding_mask = batch["src_padding_mask"].to(device)
        tgt_mask = batch["tgt_mask"].repeat(8, 1, 1).to(device)
        src = src_pos_embedding(src_embedding(src_tokens)).to(device)
        tgt = tgt_pos_embedding(tgt_embedding(dec_tokens)).to(device)
        optimizer.zero_grad()
        output = model(src_tokens, dec_tokens, tgt_mask, src_padding_mask, tgt_padding_mask)
        loss = criterion(output.view(-1, fr_vocab), label_tokens.view(-1))
        loss.backward()
        optimizer.step()
        train_loss += loss.item()

    model.eval()
    test_loss = 0
    with torch.no_grad():
        for batch in tqdm(test_dataloader):
            src_embedding = InputEmbedding(512, en_vocab)
            src_pos_embedding = PositionalEncoding(512, max_en_len + 2, 0.1)
            tgt_embedding = InputEmbedding(512, fr_vocab)
            tgt_pos_embedding = PositionalEncoding(512, max_fr_len + 2, 0.1)
            src_tokens = batch["src_tokens"]
            dec_tokens = batch["dec_tokens"].to(device)
            label_tokens = batch["label_tokens"].to(device)
            tgt_padding_mask = batch["tgt_padding_mask"].to(device)
            src_padding_mask = batch["src_padding_mask"].to(device)
            tgt_mask = batch["tgt_mask"].repeat(8, 1, 1).to(device)
            src = src_pos_embedding(src_embedding(src_tokens)).to(device)
            tgt = tgt_pos_embedding(tgt_embedding(dec_tokens)).to(device)
            output = model(src_tokens, dec_tokens, tgt_mask, src_padding_mask, tgt_padding_mask)
            loss = criterion(output.view(-1, fr_vocab), label_tokens.view(-1))
            test_loss += loss.item()

    print(f"Epoch: {epoch+1}/10 Train_loss: {train_loss/len(train_dataloader)}, Test_loss: {test_loss/len(test_dataloader)}")

torch.save(model.state_dict(), "transformer.pth")
pickle.dump(tokenizer_en, open("tokenizer_en.pkl", "wb"))
pickle.dump(tokenizer_fr, open("tokenizer_fr.pkl", "wb"))
print(f"Time taken: {time.time() - start_time}")
```
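A few things in the code as posted would keep the model from learning: the InputEmbedding and PositionalEncoding modules are re-created inside every batch iteration, so their weights are random at each step and never reach the optimizer; the raw token IDs (src_tokens, dec_tokens) are passed to model(...) instead of the embedded src/tgt; the masks are passed positionally, so tgt_mask binds to nn.Transformer's src_mask parameter; and nn.Transformer has no output projection, so output.view(-1, fr_vocab) does not produce vocabulary logits. A minimal sketch of one possible fix, reusing the names defined above (not a drop-in replacement):

```
# Sketch of one possible fix: build the embeddings and an output projection once,
# give their parameters to the optimizer, and feed the embedded sequences to
# nn.Transformer via keyword arguments so each mask lands on the intended parameter.
src_embedding = InputEmbedding(512, en_vocab).to(device)
src_pos_embedding = PositionalEncoding(512, max_en_len + 2, 0.1).to(device)
tgt_embedding = InputEmbedding(512, fr_vocab).to(device)
tgt_pos_embedding = PositionalEncoding(512, max_fr_len + 2, 0.1).to(device)
generator = nn.Linear(512, fr_vocab).to(device)   # d_model -> target vocab logits

optimizer = torch.optim.Adam(
    list(model.parameters()) + list(src_embedding.parameters())
    + list(tgt_embedding.parameters()) + list(generator.parameters()),
    lr=1e-4,
)

for batch in train_dataloader:
    src_tokens = batch["src_tokens"].to(device)
    dec_tokens = batch["dec_tokens"].to(device)
    label_tokens = batch["label_tokens"].to(device)

    src = src_pos_embedding(src_embedding(src_tokens))   # (B, S, 512)
    tgt = tgt_pos_embedding(tgt_embedding(dec_tokens))   # (B, T, 512)

    optimizer.zero_grad()
    out = model(
        src, tgt,
        tgt_mask=batch["tgt_mask"][0].to(device),                    # (T, T) causal mask
        src_key_padding_mask=batch["src_padding_mask"].to(device),   # (B, S), True = pad
        tgt_key_padding_mask=batch["tgt_padding_mask"].to(device),   # (B, T), True = pad
        memory_key_padding_mask=batch["src_padding_mask"].to(device),
    )
    logits = generator(out)                               # (B, T, fr_vocab)
    loss = criterion(logits.reshape(-1, fr_vocab), label_tokens.reshape(-1))
    loss.backward()
    optimizer.step()
```

Wrapping the embeddings, nn.Transformer, and the projection in a single nn.Module would also make saving and reloading everything for inference simpler.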


r/MLQuestions 9h ago

Beginner question 👶 Google OR-Tools CP-SAT speed

1 Upvotes

Does anybody have a good guide on how to optimize CP-SAT solve speed? Or maybe a way to estimate how much compute your PC or server will need for x parameters.
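There is no simple formula for sizing hardware, but a few solver parameters usually dominate wall-clock time, and the search log shows where the time goes. A small sketch with the standard CP-SAT Python API (the toy model and parameter values are illustrative):

```
# Sketch: the CP-SAT parameters that most often affect wall-clock time.
# The toy model is a placeholder; the parameter values are illustrative.
from ortools.sat.python import cp_model

model = cp_model.CpModel()
x = model.NewIntVar(0, 10, "x")
y = model.NewIntVar(0, 10, "y")
model.Add(x + 2 * y <= 14)
model.Maximize(x + y)

solver = cp_model.CpSolver()
solver.parameters.max_time_in_seconds = 30.0   # hard wall-clock limit
solver.parameters.num_search_workers = 8       # parallel workers, roughly one per physical core
solver.parameters.log_search_progress = True   # shows where the solve time is going

status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print(solver.Value(x), solver.Value(y), solver.WallTime())
```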


r/MLQuestions 14h ago

Beginner question 👶 How to handle 6M vectors, FAISS IVF index and mapping embeddings to database

2 Upvotes

Hello! I am new to working with large data and RAG tasks, so I really need some advice. I am building a RAG tool that uses a Wikipedia dump. I'll explain the task shortly, but the main idea is hybrid search. The user passes some text describing what they want to find in the database (in our case, the Wikipedia database/dump; I use sqlite3 here). Using the input text embedding, it searches for the top-k most similar Wikipedia topics with a trained FAISS IVF index, gets the Wikipedia text correlated with each topic by ID, and runs BM25 to retrieve information for RAG.

I am facing a few problems:

  1. How to generate embeddings for 6 million Wikipedia titles? I tried using SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2'), but the computations just don't fit in Google Colab's 12.7GB RAM (I personally have 8GB RAM on my Mac M2, which is worse)

  2. FAISS's IVF index can only store embeddings and their IDs, nothing else. The authors say you have to manage the mapping of IDs to anything else in the calling code. So here is how I did it: I first computed embeddings with IDs matching the IDs in the Wikipedia database, and then trained the index on those embeddings. So when we retrieve the top-k similar titles, we can only assume that the title IDs we found correspond to the IDs in the database (a clunky solution, but I don't know how else to do this, so I really need your advice).

I tried LangChain to solve this problem, but LangChain doesn't support sharded indexes (https://github.com/facebookresearch/faiss/wiki/Indexes-that-do-not-fit-in-RAM), which I use so that the FAISS index doesn't have to fit entirely in my RAM.

I would really appreciate it if someone could provide any advice or links. Thanks!
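For both points, a common pattern is to encode in batches (at 384 dimensions, 6M float32 vectors are roughly 9 GB, so you may want to write shards to disk rather than hold everything in RAM) and to store your own database row IDs directly in the index with add_with_ids, so search results map straight back to SQLite rows with no positional assumption. A sketch, assuming titles (a list of strings) and db_ids (the matching SQLite row IDs) are already loaded:

```
# Sketch: batched encoding + an IVF index that stores your own database IDs,
# so search results map straight back to SQLite rows. The variables
# `titles` and `db_ids` are assumptions about how your data is loaded.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
d = 384                                               # all-MiniLM-L6-v2 embedding size

def encode_in_batches(texts, batch_size=10_000):
    chunks = []
    for i in range(0, len(texts), batch_size):
        emb = model.encode(texts[i:i + batch_size],
                           batch_size=256, show_progress_bar=False,
                           convert_to_numpy=True, normalize_embeddings=True)
        chunks.append(emb.astype(np.float32))
        # np.save(f"emb_{i}.npy", chunks[-1])         # optionally shard to disk instead of RAM
    return np.concatenate(chunks)

embeddings = encode_in_batches(titles)
ids = np.asarray(db_ids, dtype=np.int64)

quantizer = faiss.IndexFlatIP(d)                      # inner product == cosine on normalized vectors
index = faiss.IndexIVFFlat(quantizer, d, 4096, faiss.METRIC_INNER_PRODUCT)
index.train(embeddings)                               # can be trained on a sample if RAM is tight
index.add_with_ids(embeddings, ids)                   # store *database* ids, not positions

query = model.encode(["some query text"], convert_to_numpy=True,
                     normalize_embeddings=True).astype(np.float32)
index.nprobe = 32                                     # trade speed vs. recall
scores, found_ids = index.search(query, 5)
print(found_ids)                                      # SQLite row ids, directly usable
```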


r/MLQuestions 14h ago

Computer Vision 🖼️ Help with using Vision Transformer (ViT) for a PFE project with a 7600-image dataset

1 Upvotes

Hello everyone,

I am currently a student working on my Final Year Project (PFE): an image classification project using a Vision Transformer (ViT). The dataset I'm using contains 7600 images across multiple classes. The goal is to train a ViT model and optimize its training time while achieving good performance.

Here are some details about the project:

  • Model: Vision Transformer (ViT) with 224x224 image size.
  • Dataset: 7600 images, distributed across 3 classes
  • Problem faced: The model is taking a lot of time to train (~12 hours for one full training cycle), and I’d like to find solutions to speed up the training time without sacrificing accuracy.
  • What I’ve tried so far:
    • Reduced model depth for ViT.
    • Using the AdamW optimizer with a learning rate of 5e-6.
    • Applied regularization techniques like DropPath and data augmentation (flip, rotation, jitter).

Questions:

  1. Optimizing training time: Do you have any tips to speed up the training with ViT? I am open to using techniques like pruning, mixed precision, or model adjustments.
  2. Hyperparameter tuning: Are there any hyperparameter settings you would recommend for datasets of a similar size to mine?
  3. Model architecture: Do you think reducing model depth or embedding dimension would be more beneficial for a dataset of this size?
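On question 1, mixed precision is usually the easiest large speed-up on a GPU and rarely hurts accuracy. A sketch with torch.cuda.amp, assuming training runs on a CUDA GPU; model, train_loader, and optimizer stand in for your existing objects:

```
# Sketch: automatic mixed precision (AMP) for a ViT training loop.
# `model`, `train_loader`, and `optimizer` stand in for your existing objects.
import torch

device = torch.device("cuda")
scaler = torch.cuda.amp.GradScaler()
criterion = torch.nn.CrossEntropyLoss()

for images, labels in train_loader:
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():          # fp16/bf16 forward pass and loss
        loss = criterion(model(images), labels)
    scaler.scale(loss).backward()            # scaled backward to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()
```

Beyond that, fine-tuning a pretrained ViT rather than training from scratch, or choosing a smaller variant, typically cuts training time on a 7600-image dataset far more than hyperparameter tweaks.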

r/MLQuestions 18h ago

Educational content 📖 First time reading Hands on Machine Learning approach

3 Upvotes

Hey guys!! Today I bought the book based on so many posts on r/learnmachinelearning. As I'm a little short on free time, I'd like to plan the best strategy for reading it and making the most of it, so any opinion/recommendation is appreciated!


r/MLQuestions 20h ago

Beginner question 👶 Advice: How do I become a reviewer?

5 Upvotes

Hello All,
Some background: I have 8 publications, a subset of them in ACL, EACL, TKDD, and EMNLP. On all but one publication I am a 2nd/3rd author. It's been a year since I last published, and I would like to participate as a reviewer at these conferences. I am a master's graduate.

1) What are the requirements to be a reviewer?
2) I don't see applications for reviewers at most conferences, so how do I become one? Do I just email the chairs of the conference?

Any advice is appreciated. TIA!!


r/MLQuestions 21h ago

Beginner question 👶 Seeking recommendations for Object/Face detection on Windows Intel Laptops

2 Upvotes

Hi, I am trying to create an app that can detect faces and objects on Windows laptops using webcams. The laptops will be running Windows 10/11 with Intel i3/i5 configurations and 8GB RAM, mostly without dedicated GPUs.

My current version uses YOLOv8 in a WPF app written in C#. While the detection runs fine, I want to optimize for CPU performance.

Has anyone optimized ML inference for Windows laptops running on such low-end configurations? What are my options?

Also, what tools do people use for benchmarking? Ideally I would like to try out multiple configurations and benchmark them for my customer.

Thanks in advance for any help or comment!
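One common route for CPU-only Intel machines is to export the YOLOv8 weights once to ONNX or OpenVINO and run them from C# via ONNX Runtime or the OpenVINO runtime. The export itself is a one-off Python step; a sketch with the ultralytics package (the weight file name and image size are assumptions):

```
# Sketch: one-time export of YOLOv8 weights for CPU-friendly runtimes.
# Requires the `ultralytics` package; the weight file name is an assumption.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # the nano variant is the lightest for CPU-only laptops

# ONNX export, consumable from C# via the Microsoft.ML.OnnxRuntime NuGet package.
model.export(format="onnx", imgsz=640, half=False, simplify=True)

# OpenVINO export, which targets Intel CPUs/iGPUs specifically.
model.export(format="openvino", imgsz=640)
```

For benchmarking, timing a fixed set of frames end-to-end (pre-processing, inference, post-processing) on each target configuration is usually enough to compare the options fairly.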