Geometric Deep Learning
🏗️ Representation Learning for NLP

All neural-network (NN) architectures create vector representations, also called embeddings, of the input. These vectors pack statistical and semantic cues that let the model classify, translate, or generate text. The network learns better representations through feedback from a loss function.

Transformers build features for each word with an attention mechanism that asks: "How important is every other word in the sentence to this word?" (A sketch of this update appears below, after the GNN section.)

🔗 GNNs: Representing Graphs

Graph Neural Networks (GNNs), or Graph Convolutional Networks (GCNs), embed nodes and edges. They rely on neighbourhood aggregation / message passing: ...
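The aggregation formula itself is elided above, but a minimal sketch of one common variant helps fix the idea: each node averages its neighbours' features, mixes them with its own, and applies a nonlinearity. The names `message_passing_layer`, `W_self`, and `W_neigh` are hypothetical, and mean aggregation is just one possible aggregator, not necessarily the exact update this section goes on to define.

```python
import torch
import torch.nn.functional as F

def message_passing_layer(H, A, W_self, W_neigh):
    """One round of neighbourhood aggregation: each node averages its
    neighbours' features, mixes them with its own, and applies ReLU."""
    deg = A.sum(dim=1, keepdim=True).clamp(min=1)  # node degrees (avoid /0)
    neigh_mean = (A @ H) / deg                     # mean of neighbour features
    return F.relu(H @ W_self + neigh_mean @ W_neigh)

# toy graph: 4 nodes on a path (edges 0-1, 1-2, 2-3), 8-dim features
d = 8
A = torch.tensor([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=torch.float32)  # adjacency matrix
H = torch.randn(4, d)                                  # initial node features
W_self, W_neigh = torch.randn(d, d), torch.randn(d, d)
H_next = message_passing_layer(H, A, W_self, W_neigh)
print(H_next.shape)  # torch.Size([4, 8])
```

Stacking several such layers lets information propagate beyond immediate neighbours, one hop per layer.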
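And here is the per-word attention update from the NLP section above, as a minimal single-head, scaled dot-product sketch in PyTorch: each word scores every other word with a query-key dot product, then takes a softmax-weighted sum of value vectors. The function and weight names (`single_head_attention`, `W_q`, `W_k`, `W_v`) are illustrative, not taken from any particular library.

```python
import torch
import torch.nn.functional as F

def single_head_attention(H, W_q, W_k, W_v):
    """One attention layer: every word's new feature is a weighted sum of
    all words' value vectors, weighted by query-key similarity."""
    Q = H @ W_q          # queries: (n_words, d)
    K = H @ W_k          # keys:    (n_words, d)
    V = H @ W_v          # values:  (n_words, d)
    # scores[i, j] answers: "how important is word j to word i?"
    scores = Q @ K.T / K.shape[-1] ** 0.5
    weights = F.softmax(scores, dim=-1)  # each row sums to 1
    return weights @ V                   # (n_words, d) updated features

# toy sentence of 5 words with 8-dim embeddings
d = 8
H = torch.randn(5, d)
W_q, W_k, W_v = (torch.randn(d, d) for _ in range(3))
H_next = single_head_attention(H, W_q, W_k, W_v)
print(H_next.shape)  # torch.Size([5, 8])
```

Note the structural similarity to the GNN update above: attention is also a weighted aggregation over neighbours, except that a sentence is treated as a fully connected graph of words and the weights are learned from content rather than fixed by an adjacency matrix.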