github-actions[bot] 1c03fb48da
build(deps): bump transformers from 4.35.2 to 4.36.0 (#449)
Bumps [transformers](https://github.com/huggingface/transformers) from
4.35.2 to 4.36.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/huggingface/transformers/releases">transformers's
releases</a>.</em></p>
<blockquote>
<h2>v4.36: Mixtral, Llava/BakLlava, SeamlessM4T v2, AMD ROCm, F.sdpa
wide-spread support</h2>
<h2>New model additions</h2>
<h3>Mixtral</h3>
<p>Mixtral is the new open-source model from Mistral AI, announced in the <a
href="https://mistral.ai/news/mixtral-of-experts/">Mixtral of Experts</a>
blog post. The model has been shown to have capabilities comparable to
ChatGPT according to the benchmark results shared in the release blog
post.</p>
<!-- raw HTML omitted -->
<p>The architecture is a sparse Mixture of Experts with a top-2 routing
strategy, similar to the <code>NllbMoe</code> architecture in transformers.
You can use it through the <code>AutoModelForCausalLM</code> interface:</p>
<pre lang="py"><code>&gt;&gt;&gt; import torch
&gt;&gt;&gt; from transformers import AutoModelForCausalLM,
AutoTokenizer
<p>&gt;&gt;&gt; model =
AutoModelForCausalLM.from_pretrained(&quot;mistralai/Mixtral-8x7B&quot;,
torch_dtype=torch.float16, device_map=&quot;auto&quot;)
&gt;&gt;&gt; tokenizer =
AutoTokenizer.from_pretrained(&quot;mistralai/Mistral-8x7B&quot;)</p>
<p>&gt;&gt;&gt; prompt = &quot;My favourite condiment is&quot;</p>
<p>&gt;&gt;&gt; model_inputs = tokenizer([prompt],
return_tensors=&quot;pt&quot;).to(device)
&gt;&gt;&gt; model.to(device)</p>
<p>&gt;&gt;&gt; generated_ids = model.generate(**model_inputs,
max_new_tokens=100, do_sample=True)
&gt;&gt;&gt; tokenizer.batch_decode(generated_ids)[0]
</code></pre></p>
<p>The model is compatible with existing optimisation tools such as Flash
Attention 2, <code>bitsandbytes</code> and the PEFT library. The checkpoints
are released under the <a
href="https://huggingface.co/mistralai"><code>mistralai</code></a>
organisation on the Hugging Face Hub.</p>
<h3>Llava / BakLlava</h3>
<p>Llava is an open-source chatbot trained by fine-tuning LLaMA/Vicuna
on GPT-generated multimodal instruction-following data. It is an
auto-regressive language model based on the transformer architecture.
In other words, it is a multimodal version of LLMs fine-tuned for chat
/ instructions.</p>
<!-- raw HTML omitted -->
<p>The Llava model was proposed in <a
href="https://arxiv.org/pdf/2310.03744">Improved Baselines with Visual
Instruction Tuning</a> by Haotian Liu, Chunyuan Li, Yuheng Li and Yong
Jae Lee.</p>
<ul>
<li>[<code>Llava</code>] Add Llava to transformers by <a
href="https://github.com/younesbelkada"><code>@​younesbelkada</code></a>
in <a
href="https://redirect.github.com/huggingface/transformers/issues/27662">#27662</a></li>
<li>[LLaVa] Some improvements by <a
href="https://github.com/NielsRogge"><code>@​NielsRogge</code></a> in <a
href="https://redirect.github.com/huggingface/transformers/issues/27895">#27895</a></li>
</ul>
<p>The integration also includes <a
href="https://github.com/SkunkworksAI/BakLLaVA"><code>BakLlava</code></a>
which is a Llava model trained with Mistral backbone.</p>
<p>The model is compatible with the <code>&quot;image-to-text&quot;</code>
pipeline:</p>
<pre lang="py"><code>from transformers import pipeline
from PIL import Image
import requests

model_id = &quot;llava-hf/llava-1.5-7b-hf&quot;
</code></pre>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="14666775a2"><code>1466677</code></a>
Release: v4.36.0</li>
<li><a
href="accccdd008"><code>accccdd</code></a>
[<code>Add Mixtral</code>] Adds support for the Mixtral MoE (<a
href="https://redirect.github.com/huggingface/transformers/issues/27942">#27942</a>)</li>
<li><a
href="0676d992a5"><code>0676d99</code></a>
[<code>from_pretrained</code>] Make from_pretrained fast again (<a
href="https://redirect.github.com/huggingface/transformers/issues/27709">#27709</a>)</li>
<li><a
href="9f18cc6df0"><code>9f18cc6</code></a>
Fix SDPA dispatch &amp; make SDPA CI compatible with torch&lt;2.1.1 (<a
href="https://redirect.github.com/huggingface/transformers/issues/27940">#27940</a>)</li>
<li><a
href="7ea21f1f03"><code>7ea21f1</code></a>
[LLaVa] Some improvements (<a
href="https://redirect.github.com/huggingface/transformers/issues/27895">#27895</a>)</li>
<li><a
href="5e620a92cf"><code>5e620a9</code></a>
Fix <code>SeamlessM4Tv2ModelIntegrationTest</code> (<a
href="https://redirect.github.com/huggingface/transformers/issues/27911">#27911</a>)</li>
<li><a
href="e96c1de191"><code>e96c1de</code></a>
Skip <code>UnivNetModelTest::test_multi_gpu_data_parallel_forward</code>
(<a
href="https://redirect.github.com/huggingface/transformers/issues/27912">#27912</a>)</li>
<li><a
href="8d8970efdd"><code>8d8970e</code></a>
[BEiT] Fix test (<a
href="https://redirect.github.com/huggingface/transformers/issues/27934">#27934</a>)</li>
<li><a
href="235be08569"><code>235be08</code></a>
[DETA] fix backbone freeze/unfreeze function (<a
href="https://redirect.github.com/huggingface/transformers/issues/27843">#27843</a>)</li>
<li><a
href="df5c5c62ae"><code>df5c5c6</code></a>
Fix typo (<a
href="https://redirect.github.com/huggingface/transformers/issues/27918">#27918</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/huggingface/transformers/compare/v4.35.2...v4.36.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=transformers&package-manager=pip&previous-version=4.35.2&new-version=4.36.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>
2023-12-12 19:21:28 +01:00

aki_prj23_transparenzregister

Badges: Python · Actions status · Ruff · pre-commit · Checked with mypy · Documentation Status · Code style: black

Contributions

See CONTRIBUTING.md for how code should be formatted and the rules we have set for ourselves.

Available entrypoints

The project currently has the following entrypoints available:

  • data-transformation > Transfers all the data from the MongoDB into the SQL db to make it available as production data.
  • data-processing > Processes the data using NLP methods and transfers matched data into the SQL table, ready for use.
  • reset-sql > Resets all SQL tables in the connected db.
  • copy-sql > Copies the content of one db to another db.
  • webserver > Starts the webserver showing the analysis results.

DB Connection settings

To connect to the SQL db, see sql/connector.py. To connect to the Mongo db, see [connect].

Create a secrets.json in the root of this repo with the following structure (values to be replaced by the desired config):

The sqlite entry is an alternative to the postgres section.

{
  "sqlite": "path-to-sqlite.db",
  "postgres": {
    "username": "username",
    "password": "password",
    "host": "localhost",
    "database": "db-name",
    "port": 5432
  },
  "mongo": {
    "username": "username",
    "password": "password",
    "host": "localhost",
    "database": "transparenzregister",
    "port": 27017
  }
}
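For illustration, here is a minimal sketch of how such a secrets.json could be turned into a database connection string. The `connection_string` helper below is hypothetical and not part of this repo; the real connection logic lives in sql/connector.py and may differ.

```python
import json

# Hypothetical helper (not part of the repo): pick the sqlite entry when it
# is present, otherwise build a PostgreSQL connection string from the
# "postgres" section of secrets.json.
def connection_string(secrets: dict) -> str:
    if "sqlite" in secrets:
        return f"sqlite:///{secrets['sqlite']}"
    pg = secrets["postgres"]
    return (
        f"postgresql://{pg['username']}:{pg['password']}"
        f"@{pg['host']}:{pg['port']}/{pg['database']}"
    )

# Parse a secrets.json-shaped document and build the connection string.
secrets = json.loads(
    '{"postgres": {"username": "u", "password": "p",'
    ' "host": "localhost", "database": "db-name", "port": 5432}}'
)
print(connection_string(secrets))  # postgresql://u:p@localhost:5432/db-name
```

Because the sqlite entry takes precedence in this sketch, a secrets.json containing both keys would silently use sqlite; the repo's own connector may resolve that differently.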

Alternatively, the secrets can be provided as environment variables. One option to do so is to add a .env file with the following layout:

PYTHON_POSTGRES_USERNAME=postgres
PYTHON_POSTGRES_PASSWORD=postgres
PYTHON_POSTGRES_HOST=postgres
PYTHON_POSTGRES_DATABASE=postgres
PYTHON_POSTGRES_PORT=5432

PYTHON_MONGO_USERNAME=username
PYTHON_MONGO_HOST=mongodb
PYTHON_MONGO_PASSWORD=password
PYTHON_MONGO_PORT=27017
PYTHON_MONGO_DATABASE=transparenzregister

PYTHON_SQLITE_PATH=PathToSQLite3.db # An override path to an sqlite db

PYTHON_DASH_LOGIN_USERNAME=some-login-to-webgui
PYTHON_DASH_LOGIN_PW=some-pw-to-login-to-webgui

PYTHON_INGEST_SCHEDULE=12 # Every x hours

CR=ghcr.io/fhswf/aki_prj23_transparenzregister
TAG=latest

HTTP_PORT=80

The prefix PYTHON_ can be customized by setting a different prefix when constructing the ConfigProvider.
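As a sketch of how such prefixed environment variables might be collected — the `collect` helper below is hypothetical and the actual ConfigProvider API may differ:

```python
import os

# Hypothetical sketch (not the repo's ConfigProvider): gather every
# environment variable that starts with the given prefix, strip the prefix,
# and lower-case the remainder to form the setting name.
def collect(prefix: str = "PYTHON_") -> dict:
    return {
        key[len(prefix):].lower(): value
        for key, value in os.environ.items()
        if key.startswith(prefix)
    }

os.environ["PYTHON_POSTGRES_HOST"] = "localhost"
print(collect()["postgres_host"])  # localhost
```

Passing a different `prefix` argument mirrors the customization described above: `collect("MYAPP_")` would read `MYAPP_POSTGRES_HOST` instead.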
