In [1]:
from typing import Final
import numpy as np
import pandas as pd
number_of_entries = 100
number_of_contacts = 10
ids: Final = list(range(number_of_entries))
companies = pd.DataFrame(columns=[], index=pd.Index(ids, name="company_id"))
companies
# Randomly sample company ids for both ends of each connection
id1 = (
pd.Series(ids * number_of_contacts, name="Company 1")
.sample(frac=0.7, random_state=42)
.reset_index(drop=True)
)
id2 = (
pd.Series(ids * number_of_contacts, name="Company 2")
.sample(frac=0.7, random_state=43)
.reset_index(drop=True)
)
# Combine both id series with a random weight per connection
connections = (
pd.DataFrame(
[
id1,
pd.Series(
np.random.randint(0, 100, size=max(len(id1), len(id2))),
name="Connection Weight",
),
id2,
]
)
.T.dropna()
.astype(int)
)
# Drop self-connections (a company linked to itself)
connections = connections.loc[connections["Company 1"] != connections["Company 2"]]
connections
Out[1]:
In [73]:
connections[["Company 1", "Company 2"]].groupby("Company 1").count()
Out[73]:
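The count above only looks at the "Company 1" side of each edge, i.e. outgoing connections. If a connection is meant to count for both companies involved, the per-company degree can be taken over both endpoint columns. A minimal sketch under that assumption (the name degree is only illustrative):
In [ ]:
# Undirected degree: stack both endpoint columns and count occurrences per company id
degree = (
pd.concat([connections["Company 1"], connections["Company 2"]])
.value_counts()
.rename("degree")
.sort_index()
)
degree.head()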
In [72]:
companies["Analysis-d0"] = 1
companies["Analysis-d1"] = connections[["Company 1", "Company 2"]].groupby("Company 1").count()
connection_sum = connections.join(connections.set_index("Company 2"), on=)
companies["Analysis-d1"] = connections[["Company 1", "Company 2"]].groupby("Company 1").count()
# for tiers in range(5):
companies
Out[72]:
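The commented-out # for tiers in range(5): hints that the per-company counts were meant to be extended to higher tiers (connections of connections). A minimal sketch of such a loop, assuming tier n should count the distinct companies reachable in exactly n hops along the "Company 1" → "Company 2" direction; reach and edges are illustrative names, and the loop overwrites Analysis-d1 with a deduplicated count:
In [ ]:
# Start with every company reaching only itself (tier 0)
reach = pd.DataFrame({"Company 1": ids, "Company 2": ids})
edges = connections[["Company 1", "Company 2"]].drop_duplicates()
for tier in range(1, 5):
    # Follow every known path one edge further: reach (A -> B) joined with edges (B -> C)
    reach = (
        reach.merge(edges, left_on="Company 2", right_on="Company 1", suffixes=("", "_next"))
        [["Company 1", "Company 2_next"]]
        .rename(columns={"Company 2_next": "Company 2"})
        .drop_duplicates()
    )
    companies[f"Analysis-d{tier}"] = reach.groupby("Company 1")["Company 2"].count()
companies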
In [ ]:
companies