PyNeuraLogic

PyNeuraLogic lets you use Python to create Differentiable Logic Programs


Logic programming is a declarative coding paradigm in which you declare your logical variables and relations between them. These can be further composed into so-called rules that drive the computation. Such a rule set then forms a logic program, and its execution is equivalent to performing logic inference with the rules.
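
For a tiny illustration (with hypothetical parent/ancestor relations of our own choosing), a rule can state that X is an ancestor of Y whenever X is a parent of some Z who is, in turn, an ancestor of Y. In the syntax used throughout this README, that reads:

Relation.ancestor(Var.X, Var.Y) <= (Relation.parent(Var.X, Var.Z), Relation.ancestor(Var.Z, Var.Y))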

PyNeuraLogic, through its NeuraLogic backend, then makes this inference process differentiable, which in turn makes it equivalent to forward propagation in deep learning. This lets you learn numeric parameters that can be associated with the rules, just like you learn weights in neural networks.

What is this good for?

Many things! For instance - ever heard of Graph Neural Networks (GNNs)? Well, a graph happens to be a special case of a logical relation - a binary one to be more exact. Now, at the heart of any GNN model there is a so-called propagation rule for passing ‘messages’ between the neighboring nodes. Particularly, the representation (‘message’) of a node X is calculated by aggregating the representations of adjacent nodes Y, i.e. those with an edge between X and Y.

Or, a bit more ‘formally’:

Relation.node2(Var.X) <= (Relation.node1(Var.Y), Relation.edge(Var.Y, Var.X))

…and that’s the actual code! Now for a classic learnable GNN layer, you’ll want to add some numeric parameters, such as

Relation.node2(Var.X)[5,10] <= (Relation.node1(Var.Y)[10,20], Relation.edge(Var.Y, Var.X))

to project your [1,20] input node embeddings through a learnable [10,20] layer before the aggregation, and subsequently through a [5,10] layer after the aggregation. The particular aggregation and transformation functions, as well as other details, can naturally be specified further, but you can just as well leave them at the defaults, as we did here, and you have your first fully functional GNN layer!
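
A rough sketch of how such a rule could be plugged into a model, assuming the usual neuralogic.core imports (the template and model names here are ours, and exact API details may differ between versions):

from neuralogic.core import Template, Relation, Var, Settings

# collect the weighted rule(s) of the logic program into a template
template = Template()
template += Relation.node2(Var.X)[5, 10] <= (
    Relation.node1(Var.Y)[10, 20],
    Relation.edge(Var.Y, Var.X),
)

# build the template into a differentiable model (default settings assumed in this sketch)
model = template.build(Settings())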

How is it different from other GNN frameworks?

Naturally, PyNeuraLogic is by no means limited to GNN models, as the expressiveness of relational logic extends far beyond graphs. So nothing stops you from playing directly with:

  • multiple relations and object types

  • hypergraphs, nested graphs, relational databases

  • alternative propagation schemes

  • direct sub-structure (pattern) matching

  • inclusion of logical background knowledge

  • and more…

In PyNeuraLogic, all these ideas take the same form of small, simple logic programs. Thanks to their declarative nature, these are typically highly transparent and easy to understand. Consequently, there is no need to design a new black-box class name for each small modification of the GNN rule - you code directly at the level of the logical principles here!
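
For instance, direct matching of a triangle pattern (the triangle relation name being our own illustrative choice) is again just a single rule, and a hyperedge is simply a relation with more than two arguments:

Relation.triangle(Var.X, Var.Y, Var.Z) <= (Relation.edge(Var.X, Var.Y), Relation.edge(Var.Y, Var.Z), Relation.edge(Var.Z, Var.X))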

The backend engine then creates the underlying differentiable computation (inference) graphs in a fully automated and dynamic fashion, hence you don’t have to care about aligning everything into some (static) tensor operations. This gives you considerably more expressiveness, and, perhaps surprisingly, sometimes even performance.

We hope you’ll find the framework useful in designing your own deep relational learning ideas beyond the GNNs! Please let us know if you need some guidance or would like to collaborate!

Examples

Papers