Hi Pythonistas!
So far, we’ve worked with data that looks like:
- Vectors
- Images
- Sequences
But not all data fits neatly into grids or timelines. Some data is all about connections.
- Social networks
- Road maps
- Molecules
- Knowledge graphs
This is where Graph Neural Networks (GNNs) come in.
Why Graphs Are Different
In a graph:
- Data points are nodes
- Relationships are edges
Unlike images or text:
- There is no fixed node order
- There is no fixed shape
- The structure itself carries meaning
Who is connected to whom matters as much as the data itself.
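To make "nodes and edges" concrete, here is a minimal sketch in plain Python (no graph library). The friendship graph and its numbers are made up for illustration:

```python
# A tiny undirected graph: 4 nodes, edges are the connections between them.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

# Build an adjacency list: for each node, who is it connected to?
neighbors = {n: [] for n in range(4)}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

print(neighbors[2])  # node 2 is connected to nodes 0, 1 and 3
```

Notice there is no "first" or "last" node here, and no grid: the adjacency list *is* the structure.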
The Core Idea of GNNs
GNNs learn by passing messages.
Each node:
- Looks at its neighbors
- Collects information from them
- Updates its own representation
This process is called message passing.
Over multiple steps:
- Information flows across the graph
- Nodes learn from both local and global structure
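The three steps above (look at neighbors, collect, update) can be sketched in plain Python. This is a toy version, not a real GNN layer: real layers use learned weight matrices and nonlinearities, while here we just average. All numbers are made up:

```python
# A path graph 0 - 1 - 2 with one-dimensional node features.
features = {0: [1.0], 1: [0.0], 2: [0.0]}
neighbors = {0: [1], 1: [0, 2], 2: [1]}

def message_passing_step(features, neighbors):
    updated = {}
    for node, own in features.items():
        # 1. Collect messages: the feature vectors of the neighbors.
        msgs = [features[n] for n in neighbors[node]]
        # 2. Aggregate: average each feature dimension across messages.
        agg = [sum(vals) / len(msgs) for vals in zip(*msgs)]
        # 3. Update: mix the node's own features with the aggregated message.
        updated[node] = [(o + a) / 2 for o, a in zip(own, agg)]
    return updated

new_features = message_passing_step(features, neighbors)
print(new_features[1])  # node 1 has moved toward the average of its neighbors
```

Run the step several times and information from node 0 gradually reaches node 2, even though they are not directly connected: that is how stacking GNN layers lets nodes see beyond their immediate neighborhood.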
Think of It Like This
Imagine asking a person for an opinion.
They:
- Think about their own view
- Consider what their friends think
- Adjust their answer
That’s exactly how a GNN updates a node.
What Do GNNs Learn?
GNNs can learn:
- Node-level properties (e.g. user classification)
- Edge-level properties (e.g. link prediction)
- Graph-level properties (e.g. molecule classification)
Same idea. Different outputs.
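The three output levels all start from the same node embeddings; only the final "head" differs. A hedged sketch, with made-up embeddings standing in for what a trained GNN would produce:

```python
# Hypothetical node embeddings (in practice, the output of GNN layers).
emb = {0: [0.2, 0.8], 1: [0.9, 0.1], 2: [0.4, 0.6]}

# Node level: classify each node from its own embedding.
node_pred = {n: ("A" if v[0] > v[1] else "B") for n, v in emb.items()}

# Edge level: score a candidate link (0, 2), e.g. with a dot product.
link_score = sum(a * b for a, b in zip(emb[0], emb[2]))

# Graph level: pool all node embeddings into one vector (mean pooling),
# then classify the whole graph from that single vector.
graph_vec = [sum(v[i] for v in emb.values()) / len(emb) for i in range(2)]
```

Same embeddings, three different readouts: per node, per pair of nodes, or pooled over the whole graph.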
Where GNNs Are Used
GNNs show up in:
- Social network analysis
- Recommendation systems
- Drug discovery and chemistry
- Fraud detection
- Knowledge graphs
Whenever relationships matter, GNNs shine.
Why GNNs Are Powerful
- They respect graph structure
- They generalize to different graph sizes
- They share parameters across nodes
The Challenges
GNNs aren’t perfect.
Common issues:
- Over-smoothing (nodes become too similar)
- Scalability for very large graphs
- Complex training pipelines
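Over-smoothing is easy to see in a toy experiment. If we repeat a plain averaging update (a stand-in for stacking many GNN layers), every node on a connected graph drifts toward the same value, and the nodes become indistinguishable:

```python
# A path graph 0 - 1 - 2 - 3 with scalar node features.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x = {0: 1.0, 1: 0.0, 2: 0.0, 3: -1.0}

# Apply many rounds of "mix with the neighbor average".
for step in range(100):
    x = {n: (x[n] + sum(x[m] for m in neighbors[n]) / len(neighbors[n])) / 2
         for n in neighbors}

spread = max(x.values()) - min(x.values())
print(spread)  # tiny: the nodes have become nearly identical
```

Real GNNs don't smooth this aggressively, but the same pressure exists, which is why very deep GNNs need tricks like residual connections to stay useful.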
Still, they’re one of the most exciting areas in modern deep learning.
What I Learned This Week
- GNNs work on graph-structured data
- Nodes learn by exchanging information
- Message passing is the key idea
- Powerful for relational problems
GNNs teach us an important lesson: sometimes structure matters more than features.
What's Coming Next
In the upcoming post, we'll dig into the internals of LLMs.