Skip to content

Commit 163f655

Browse files
author
Dr. Min Ye
committed
genAI ipynbs
1 parent 65fa6bc commit 163f655

File tree

6 files changed

+982
-5
lines changed

6 files changed

+982
-5
lines changed

docs/2deep_ml_ops/examples-levels.rst

Whitespace-only changes.

docs/4gen_ai/genai_llm.rst

Lines changed: 252 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,252 @@
1+
Einführung zu LLM (Large Language Models)
2+
=========================================
3+
4+
- **LLM-Lingo:** Must-Know Terms
5+
6+
https://github.com/aishwaryanr/awesome-generative-ai-guide/blob/main/resources/llm_lingo/llm_lingo_p1.pdf
7+
8+
https://python-basics-tutorial.readthedocs.io/en/latest/_sources/appendix/glossary.rst.txt
9+
10+
11+
LLM-Lingo Baseline + Fine-Tuning
12+
------------------------------------
13+
14+
Must-Know Terms sind:
15+
16+
- **Foundation Model:**
17+
asdf
18+
19+
- **Transformer:**
20+
adf
21+
22+
- **Prompting:**
23+
24+
asdf
25+
26+
- **Context-Length:**
27+
28+
asdf
29+
30+
- **Few-Shot Learning** vs **Zero-Shot Learning** vs **In-Context Learning:**
31+
32+
asdf
33+
34+
- **RAG (Retrieval-Augmented Generation):**
35+
36+
asdf
37+
38+
- **Knowledge Base (KB):**
39+
40+
asdf
41+
42+
- **Vector Database:**
43+
44+
adf
45+
46+
- **Fine-Tuning:**
47+
48+
asdf
49+
50+
- **Instruction Tuning:**
51+
52+
asf
53+
54+
- **Hallucination:**
55+
56+
asfd
57+
58+
- **SFT (Supervised Fine-Tuning):**
59+
60+
asdf
61+
62+
- **Contrastive Learning:**
63+
64+
asdf
65+
66+
- **Pruning:**
67+
68+
asdf
69+
70+
LLM-Lingo: RAG + LLM Agents
71+
-----------------------------
72+
73+
- **Knowledge Base (KB):**
74+
75+
asdf
76+
77+
- **Chunking:**
78+
79+
asdf
80+
81+
- **Indexing:**
82+
83+
asdf
84+
85+
- **Embedded Model:**
86+
87+
asdf
88+
89+
- **Vector Database:**
90+
91+
asdf
92+
93+
- **Vector Search:**
94+
95+
asdf
96+
97+
- **Retrieval:**
98+
99+
asdf
100+
101+
- **AGI (Artificial General Intelligence):**
102+
103+
asdf
104+
105+
- **LLM Agent:**
106+
107+
asdf
108+
109+
- **Agent Memory:**
110+
111+
asdf
112+
113+
- **Agent Planning:**
114+
115+
asdf
116+
117+
- **Function Calling:**
118+
119+
asdf
120+
121+
LLM Lingo: Enterprise Ready LLMs
122+
---------------------------------
123+
124+
- **LLM Bias:**
125+
126+
asdf
127+
128+
- **XAI:**
129+
130+
Explainable AI
131+
132+
- **Responsible AI:**
133+
134+
asdf
135+
136+
- **AI Governance:**
137+
138+
asdf
139+
140+
141+
- **Compliance:**
142+
143+
asdf
144+
145+
- **GDPR:**
146+
147+
asdf
148+
149+
- **Alignment:**
150+
151+
asdf
152+
153+
- **Model Ethics:**
154+
155+
asdf
156+
157+
- **PII:**
158+
159+
Personally Identifiable Information
160+
161+
- **LLMOps:**
162+
163+
asdf
164+
165+
LLM-Lingo: LLM Vulnerabilities and Attacks
166+
-------------------------------------------
167+
168+
- **Adversarial Attacks:**
169+
170+
asdf
171+
172+
- **Black-Box Attacks:**
173+
174+
asdf
175+
176+
- **White-Box Attacks:**
177+
178+
asdf
179+
180+
- **Vulnerability:**
181+
182+
asdf
183+
184+
- **Deep Fakes:**
185+
186+
asdf
187+
188+
- **Jailbreaking:**
189+
190+
asdf
191+
192+
- **Prompt Injection:**
193+
194+
asdf
195+
196+
- **Prompt Leaking:**
197+
198+
asdf
199+
200+
- **Red-Teaming:**
201+
202+
asdf
203+
204+
- **Robustness:**
205+
206+
asdf
207+
208+
- **Alignment:**
209+
210+
asdf
211+
212+
- **Watermarking:**
213+
214+
asdf
215+
216+
Learning Paradigms
217+
-------------------
218+
219+
220+
- **Unsupervised Learning:**
221+
222+
- **Supervised Learning:**
223+
224+
- **Reinforcement Learning:**
225+
226+
- **Meta-Learning:**
227+
228+
- **Multi-Task Learning:**
229+
230+
- **Zero-Shot Learning:**
231+
232+
- **Few-Shot Learning:**
233+
234+
- **Online Learning:**
235+
236+
asdf
237+
238+
- **Continual Learning:**
239+
240+
asdf
241+
242+
- **Federated Learning:**
243+
244+
asdf
245+
246+
- **Adversarial Learning:**
247+
248+
asdf
249+
250+
- **Active Learning:**
251+
252+
asdf

docs/4gen_ai/genai_theory.rst

Lines changed: 17 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -105,9 +105,16 @@ Ein typischer Transformer-Block besteht somit aus:
105105

106106
4. Add & Norm (erneut Residual-Verbindung plus Layer-Normalization)
107107

108-
Transformer-wikipedia: https://de.wikipedia.org/wiki/Transformer_%28Maschinelles_Lernen%29#/media/Datei:Transformer,_full_architecture.png
109108

110-
Transformer-diagram: https://raw.githubusercontent.com/dvgodoy/dl-visuals/main/Transformers/full_transformer.png
109+
.. figure:: ../_static/images/04_transformer_diagram.png
110+
:alt: Transformer-Modellarchitektur
111+
:align: center
112+
:width: 700px
113+
114+
**Abbildung 1:** Transformer-Modellarchitektur mit originaler Position der Layer-Normalisierung. [#]_ [#]_
115+
116+
117+
111118

112119

113120
Optimierungsalgorithmen
@@ -168,4 +175,11 @@ Die Parameter werden dann aktualisiert mittels:
168175
169176
Adam vereint somit die Vorteile von AdaGrad und RMSProp und ist weit verbreitet, weil es die Lernraten dynamisch anpasst und stabile Konvergenzen auch in tiefen Netzwerken ermöglicht.
170177

171-
Die Wahl und Konfiguration des Optimierungsalgorithmus ist entscheidend für die Trainingsdynamik und die Leistungsfähigkeit des endgültigen Modells.
178+
Die Wahl und Konfiguration des Optimierungsalgorithmus ist entscheidend für die Trainingsdynamik und die Leistungsfähigkeit des endgültigen Modells.
179+
180+
----
181+
182+
.. rubric:: Footnotes
183+
184+
.. [#] Transformer-wikipedia: https://de.wikipedia.org/wiki/Transformer_%28Maschinelles_Lernen%29#/media/Datei:Transformer,_full_architecture.png
185+
.. [#] Transformer-diagram: https://raw.githubusercontent.com/dvgodoy/dl-visuals/main/Transformers/full_transformer.png

docs/4gen_ai/index.rst

Lines changed: 4 additions & 2 deletions
Original file line numberDiff line numberDiff line change
@@ -47,8 +47,10 @@ https://de.wikipedia.org/wiki/Tay_(Bot)
4747

4848
genai_intro
4949
genai_theory
50-
genai_infrastructure
51-
cusy_genai
50+
genai_infrastructure
51+
genai_llm
52+
llm_1
53+
llm_2
5254
genai_agents
5355
regulatory
5456
abschluss

0 commit comments

Comments
 (0)