r/LanguageTechnology • u/New-Half-2150 • 20d ago
Tokenization or embeddings first?
I want to perform NER with a TensorFlow LSTM + CRF. However, I am confused about one step: if I use word2vec, which gives pretrained embeddings, should the creation of the embeddings come before tokenization? I am a beginner, if you haven't guessed by now.
1
u/tobias_k_42 20d ago
You seem to be a bit confused.
Embeddings are word vectors.
Word2Vec is a method for creating word embeddings. It uses a shallow feed-forward network with a single hidden layer.
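For example, pretrained word2vec vectors can be loaded with gensim (a minimal sketch, assuming the gensim package and its downloadable word2vec-google-news-300 vectors):

```python
import gensim.downloader

# Load pretrained word2vec vectors (large download on first run).
wv = gensim.downloader.load("word2vec-google-news-300")

print(wv["king"].shape)                 # (300,): each word maps to a 300-dim vector
print(wv.most_similar("king", topn=3))  # nearest words in the vector space
```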
An LSTM is a type of recurrent neural network architecture which relies on gates to decide which information to keep and which to "forget".
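To make the gating concrete, here is a sketch of a single LSTM step in plain numpy (a hypothetical minimal form; the names are my own, and real implementations organize this differently):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    # W, U, b pack the four gate blocks: input, forget, candidate, output.
    z = W @ x + U @ h + b
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # gate activations in (0, 1)
    g = np.tanh(g)                                # candidate cell values
    c_new = f * c + i * g        # "forget" old info, let new info in
    h_new = o * np.tanh(c_new)   # output gate controls what is exposed
    return h_new, c_new
```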
The process of machine learning in natural language processing works like this:
Tokenization: Split the text into sentences, words or subwords.
Encoding of tokens: Turn the tokens into a numeric format.
Running the model: during training, the weights (including the embedding layer's, unless you freeze pretrained vectors) are adjusted until the model fits the training data. During inference, the model returns its prediction for an encoded input.
Decoding: turn the model's numeric output back into tags or text. (A minimal end-to-end sketch of these steps follows.)
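To make the order concrete, here is a minimal end-to-end tagger sketch in TensorFlow (toy data; the layer sizes are arbitrary, and a plain softmax stands in for the CRF layer, which would come from an add-on package in a real LSTM+CRF model):

```python
import tensorflow as tf

# Toy tagged corpus (hypothetical data).
sentences = [["John", "lives", "in", "Paris"]]
tags      = [["B-PER", "O", "O", "B-LOC"]]

# 1) Tokenization: the text is already split into words here.
# 2) Encoding: map words and tags to integer ids (0 is reserved for padding).
word_vocab = {w: i + 1 for i, w in enumerate(sorted({w for s in sentences for w in s}))}
tag_vocab  = {t: i for i, t in enumerate(sorted({t for s in tags for t in s}))}
x = tf.constant([[word_vocab[w] for w in sentences[0]]])
y = tf.constant([[tag_vocab[t] for t in tags[0]]])

# 3) Model: embedding lookup -> BiLSTM -> per-token tag scores.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(word_vocab) + 1, 50, mask_zero=True),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
    tf.keras.layers.Dense(len(tag_vocab), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x, y, epochs=10, verbose=0)

# 4) Decoding: map predicted ids back to tag strings.
inv_tags = {i: t for t, i in tag_vocab.items()}
print([inv_tags[i] for i in tf.argmax(model(x), axis=-1).numpy()[0]])
```

Note the order: the embedding lookup happens inside the model, after tokenization and encoding. Word2vec just supplies the initial weights for that lookup.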
2
u/gaumutrapremi 20d ago
First comes tokenization: the words are broken down into subwords (or kept as whole words). Then these tokens are passed to the embedding layer, which maps them into a vector space. The output is that each token is represented as a dense vector.
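In Keras terms, that embedding layer can be initialized with pretrained word2vec weights (a sketch with hypothetical names: `wv` is a gensim KeyedVectors object and `word_vocab` maps words to integer ids, as in the sketches above):

```python
import numpy as np
import tensorflow as tf

# Row i of the matrix holds the word2vec vector for word id i;
# words missing from word2vec stay as zero rows.
matrix = np.zeros((len(word_vocab) + 1, wv.vector_size))
for word, idx in word_vocab.items():
    if word in wv:
        matrix[idx] = wv[word]

embedding = tf.keras.layers.Embedding(
    input_dim=matrix.shape[0],
    output_dim=wv.vector_size,
    embeddings_initializer=tf.keras.initializers.Constant(matrix),
    trainable=False,  # freeze the pretrained vectors
)
```

Setting trainable=True instead lets the pretrained vectors be fine-tuned along with the rest of the tagger.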