<h3 align="center">LSM Group: Knowledge Graphs - Mini Project - Summer Term 2021</h3>

...

...

@@ -35,6 +35,9 @@ This repository represents our work regarding the mini-project for the Foundatio

<li>

<ahref="#usage">Usage</a>

</li>

<li>

<ahref="#other-approaches">Other Approaches</a>

</li>

<li><a href="#contact">Contact</a></li>

</ol>

</details>

...

...

@@ -64,17 +67,8 @@ imbalanced-learn (https://github.com/scikit-learn-contrib/imbalanced-learn). In

new synthetic data points for the smaller class, each of which lies on the line between two data points of this class.

Using this technique together with a Linear SVM, we were able to at least slightly reduce the overweighting of the negative class.

With this setup we achieved F1-scores ranging from \<lower_bound> up to \<higher_bound> for the given test learning problems (lps).

We split the data into training and test sets in a ratio of \<ratio>.
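As a rough illustration of this pipeline, the following sketch combines imbalanced-learn's SMOTE with a Linear SVM. The synthetic toy dataset stands in for the actual entity embeddings and learning-problem labels, so all variable names and parameters here are illustrative, not taken from the repository:

```python
# Minimal sketch of the SMOTE + Linear SVM pipeline; the synthetic
# dataset stands in for the actual entity embeddings and labels.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Heavily imbalanced toy data (ca. 90% negative class).
X, y = make_classification(n_samples=500, weights=[0.9], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42
)

# SMOTE synthesizes new minority-class points on the line segments
# between existing minority-class neighbours, balancing the training set.
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)

clf = LinearSVC().fit(X_res, y_res)
print("F1:", f1_score(y_test, clf.predict(X_test)))
```

Note that only the training split is resampled; the test split keeps its original imbalance so the F1-score remains honest.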


<!-- PREREQUISITES -->

### Prerequisites

...

...

@@ -127,12 +121,18 @@ the file can be on its own to generate embeddings:

Outputs predictions for all learning problems in Turtle syntax into predictions.ttl

## Other Approaches

We tried out several different approaches to tackle the given task of classifying entities. These approaches can be found

in the folder "other_approaches" as Jupyter notebooks.

### SKLearn Clustering

In the notebook "dbscan_clustering.ipynb" we explored the possibility of using the clustering algorithms provided by SKLearn to classify the given entities. We chose DBSCAN, as SKLearn states that it works well with imbalanced datasets. Unfortunately, the approach did not yield good results and was therefore not pursued further.
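A minimal sketch of this idea follows; random points stand in for the actual entity embeddings, and eps and min_samples are illustrative values that would need tuning on the real data:

```python
# Minimal sketch of the DBSCAN clustering idea; random points stand in
# for the actual entity embeddings, and the parameters are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 2))  # stand-in for embedding vectors

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)

# DBSCAN labels noise points as -1; the remaining cluster ids would
# still have to be mapped onto the positive/negative classes.
print("clusters found:", set(labels))
```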

### PyTorch Geometric Graph Neural Network

A second approach was the implementation of a graph neural network with the library pytorch_geometric, i.e. a deep learning approach. The idea was to use a graph neural network to classify entities based on the labels of the learning problems and the edges of the knowledge graph. The first step was to fit the network on the training data using CrossEntropyLoss as the loss function and, after that, to classify all individuals (even the ones used for training). The network computes a probability distribution over the labels for each individual, and each individual is assigned to the class with the highest probability. However, since the data are very imbalanced, all individuals were assigned to the negative (excluded) class and the F1-score was not meaningful. Since we could not find a solution to this problem, we did not pursue this approach further.
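The following sketch shows the general shape of such a node classifier in pytorch_geometric. The tiny hand-built graph and all hyperparameters are placeholders for the actual knowledge graph and learning problems, not the code from the notebook:

```python
# Minimal sketch of a node-classifying GNN in pytorch_geometric; the
# tiny hand-built graph is a placeholder for the real knowledge graph.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Toy graph: 4 nodes, 8-dim features, binary labels, 2 training nodes.
data = Data(
    x=torch.randn(4, 8),
    edge_index=torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]]),
    y=torch.tensor([0, 1, 0, 1]),
    train_mask=torch.tensor([True, True, False, False]),
)

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)  # one logit vector per node

model = GCN(data.num_features, 16, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
criterion = torch.nn.CrossEntropyLoss()

# Fit on the training nodes only, then classify all individuals.
for epoch in range(100):
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = criterion(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()

# Assign every individual to the class with the highest probability.
pred = model(data.x, data.edge_index).argmax(dim=1)
```

With strongly imbalanced labels, a setup like this collapses to predicting the majority class, which is exactly the failure mode described above.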