Files
aki_prj23_transparenzregister/Jupyter/connection-counter.ipynb
Philipp Horstenkamp a9304201af (chore): Initialised devops tools (#29)
* Added a first action

* Repaired a typo

* Repaired a second typo

* Added flake8 action

* Repaired a typo in the flake8 action.

* Added a first bandit action

* Added a first batch

* Added the flake8-prebuild as a need to flake8

* Added the docker socket to the volume.

* Removed latest part from container.

* Reworked flake8

* Reworked flake8 poetry

* Changed to 64bit

* Some edits to the runner

* Added python setup

* Added python -m to python docker image.

* Added ra run linter

* Removed redundant version

* Added isort

* Added poetry install

* Added flake8 as lint.

* Uses nodejs and python image

* Removed self-hosted runner

* Added black and flake8 tests

* Removed unneeded actions.

* Added a mypy error.

* Removed poetry call before poetry setup

* Added a test to understand the poetry action better

* Added the snook poetry builder

* Reworked the repo a bit

* Removed unneeded poetry installation

* Added the isort action

* Added isort test

* Added ruff

* Added full ruff configuration

* Removed duplicate configurations

* Removed some redundant pre-commit hooks

* Repaired ruff

* Added tests.

* Removed a missing file

* Added reports as artifacts

* Removed the unneeded poetry test

* Added a license checker.

* Removed some unneeded configuration.

* Removed the import reformatted.

* Added doc generation.

* Added license summary.

* Add lint

* Switched pip-licenses to poetry.

* Remove some more packages.

* Added a make file

* Added version codes to the main package

* Changed the format of the md files

* Presentation first draft

* Version up and added extensions

* Removed the venv path from docbuild

* Actions version up

* Experiments with sphinx

* First draft of the sphinx documentation.

* Added the protocol to the time series.

* First draft of a first build pipeline

* Added mermaid version support

* Added documentation pull and branch request requirements.

* Tests should now be passing

* Add safety

* Added the action on pull_request_target

* Added a pytest coverage report

* Added a build step

* Changed the lint action to work only on python changes.

* Added the ability to compile an HTML report

* Coverage

* Finished test and build workflow

* Repaired a bug.

* Added a github branch.ref

* Removed a poetry install

* Docbuild now excludes templates

* Added the seminar presentation to the documentation build

* Added a few images

* Changed the pre-commit image

* Presentation done

* Never executing jupyter for sphinx
2023-06-23 18:47:04 +02:00


In [1]:
from typing import Final

import numpy as np
import pandas as pd

number_of_entries = 100
number_of_contacts = 10
ids: Final = list(range(number_of_entries))
companies = pd.DataFrame(columns=[], index=pd.Index(ids, name="company_id"))
companies


id1 = (
    pd.Series(ids * number_of_contacts, name="Company 1")
    .sample(frac=0.7, random_state=42)
    .reset_index(drop=True)
)
id2 = (
    pd.Series(ids * number_of_contacts, name="Company 2")
    .sample(frac=0.7, random_state=43)
    .reset_index(drop=True)
)
connections = (
    pd.DataFrame(
        [
            id1,
            pd.Series(
                np.random.randint(0, 100, size=(max(len(id1), len(id2)))),
                name="Connection Weight",
            ),
            id2,
        ]
    )
    .T.dropna()
    .astype(int)
)
connections = connections.loc[(connections["Company 1"] != connections["Company 2"])]
connections
Out[1]:
<style scoped=""> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style>
Company 1 Connection Weight Company 2
0 21 83 58
1 37 88 86
2 40 6 83
3 60 35 2
4 11 22 10
... ... ... ...
695 62 37 11
696 10 24 27
697 97 40 55
698 14 87 66
699 50 55 82

693 rows × 3 columns

In [69]:
id1 = (
    pd.Series(ids * number_of_contacts, name="Company 1")
    .sample(frac=0.7, random_state=42)
    .reset_index(drop=True)
)
id2 = (
    pd.Series(ids * number_of_contacts, name="Company 2")
    .sample(frac=0.7, random_state=43)
    .reset_index(drop=True)
)
connections = (
    pd.DataFrame(
        [
            id1,
            pd.Series(
                np.random.randint(0, 100, size=(max(len(id1), len(id2)))),
                name="Connection Weight",
            ),
            id2,
        ]
    )
    .T.dropna()
    .astype(int)
)
connections = connections.loc[(connections["Company 1"] != connections["Company 2"])]
connections
Out[69]:
<style scoped=""> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style>
Company 1 Connection Weight Company 2
0 21 36 58
1 37 59 86
2 40 26 83
3 60 21 2
4 11 2 10
... ... ... ...
695 62 45 11
696 10 64 27
697 97 24 55
698 14 51 66
699 50 93 82

693 rows × 3 columns
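
The two runs of this cell show different "Connection Weight" values because np.random.randint draws from NumPy's global, unseeded state. A minimal sketch of a reproducible alternative, using numpy.random.default_rng (an assumption, not what the original cells use):

```python
import numpy as np
import pandas as pd

# Seeded generator: re-running the cell reproduces the same weights.
# (default_rng is an assumption; the original uses the legacy np.random API.)
rng = np.random.default_rng(42)
weights = pd.Series(rng.integers(0, 100, size=5), name="Connection Weight")
print(weights.tolist())
```

With a fixed seed, the weight column no longer changes between executions of the notebook.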

In [73]:
connections[["Company 1", "Company 2"]].groupby("Company 1").count()
Out[73]:
<style scoped=""> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style>
Company 2
Company 1
0 6
1 6
2 5
3 9
4 7
... ...
95 7
96 8
97 7
98 6
99 8

100 rows × 1 columns
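
Grouping only on "Company 1" counts each company's outgoing connections; a company that appears only in the "Company 2" column would be missed. A sketch of an undirected count, on a hypothetical four-edge stand-in for the notebook's `connections` frame:

```python
import pandas as pd

# Hypothetical miniature edge list with the notebook's column names.
edges = pd.DataFrame({"Company 1": [0, 0, 1, 2], "Company 2": [1, 2, 2, 3]})

# Count each company's appearances in either endpoint column.
degree = (
    pd.concat([edges["Company 1"], edges["Company 2"]])
    .value_counts()
    .sort_index()
)
print(degree.to_dict())  # → {0: 2, 1: 2, 2: 3, 3: 1}
```

Company 3 has no outgoing edges, so the groupby-on-"Company 1" count would drop it entirely, while the concatenated count still reports its single incoming connection.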

In [72]:
companies["Analysis-d0"] = 1
companies["Analysis-d1"] = connections.groupby("Company 1")["Company 2"].count()
# Chain each connection onto the connections ending at its "Company 1"
# (the join key is an assumption; rsuffix avoids duplicate-column errors).
connection_sum = connections.join(
    connections.set_index("Company 2"), on="Company 1", rsuffix=" (prev)"
)
# for tiers in range(5):
companies
Out[72]:
<style scoped=""> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style>
Analysis-d0 Analysis-d1
company_id
0 1 6
1 1 6
2 1 5
3 1 9
4 1 7
... ... ...
95 1 7
96 1 8
97 1 7
98 1 6
99 1 8

100 rows × 2 columns
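
The commented-out `for tiers in range(5):` suggests extending the analysis beyond first-degree contacts. One way to sketch the next tier is a self-merge that chains each connection onto the connections leaving its target company; the miniature edge list and the hop-suffix column names below are illustrative assumptions:

```python
import pandas as pd

# Hypothetical miniature edge list with the notebook's column names.
edges = pd.DataFrame({"Company 1": [0, 0, 1, 2], "Company 2": [1, 2, 2, 3]})

# Chain edges: a row whose "Company 2" equals another row's "Company 1"
# forms a two-step path.
two_step = edges.merge(
    edges, left_on="Company 2", right_on="Company 1", suffixes=(" (hop 1)", " (hop 2)")
)
# Drop paths that loop straight back to the starting company.
two_step = two_step[two_step["Company 1 (hop 1)"] != two_step["Company 2 (hop 2)"]]

# Distinct companies reachable in exactly two hops, per starting company.
reach2 = two_step.groupby("Company 1 (hop 1)")["Company 2 (hop 2)"].nunique()
print(reach2.to_dict())  # → {0: 2, 1: 1}
```

Repeating the merge inside a loop over tiers would yield an "Analysis-d2", "Analysis-d3", … column per depth, which appears to be what the commented-out loop was heading toward.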
