[
{
"objectID": "about-no.html",
"href": "about-no.html",
"title": "Vladimir Shevchenko",
"section": "",
"text": "Dataanalytiker & IT-spesialist\n\n\n Telegram E-post\n\n Last ned CV\n\n\n\nKort info\n\n\n\n Sted: Bjørnafjorden, Norge\n\n\n Født: 1997\n\n\n Status: Åpen for muligheter\n\n\n Førerkort: Klasse B\n\n\n\n\n\nSpråk\n\n\n \n Norsk\n Middels\n \n \n \n \n\n\n \n Engelsk\n Grunnleggende\n \n \n \n \n\n\n \n Ukrainsk\n Morsmål\n \n \n \n \n\n\n \n Russisk\n Morsmål\n \n \n \n \n\n\n\n\n\n\nSammendrag\n\n\nDataanalytiker og IT-spesialist med solid erfaring innen prosessautomatisering, rapporteringssystemer og risikoanalyse.\n\n\nJeg spesialiserer meg på å transformere komplekse forretningsprosesser til automatiserte løsninger ved hjelp av Python, R og SQL. Min bakgrunn spenner over økonomi, risikostyring i bank og teknisk infrastruktur, noe som gir meg et unikt perspektiv på både forretningsbehov og teknisk implementering.\n\n\nJeg verdsetter ærlighet, ansvar og presisjon i arbeidet mitt. Jeg er lærevillig, nysgjerrig og åpen for å tilegne meg nye ferdigheter. På fritiden liker jeg å være ute i naturen og tilbringe tid med familie og venner.\n\n\n\n\nArbeidserfaring\n\n\n\n\n\n\n\nØkonom\n\n\n Melitopol Meierifabrikk, Ukraina\n\n\n Jun 2023 – Des 2023\n\n\nIT-spesialist med økonomifokus, ansvarlig for å konsolidere avdelingsdata og bygge enhetlig rapporteringsinfrastruktur.\n\n\n\nViktige prestasjoner:\n\n\n\nUtviklet omfattende system for kostnadsberegning av produkter i R, analyserte alle fabrikkavdelinger inkludert innkjøpspriser, produktsammensetning, lønninger, logistikk, drift og utstyrsavskrivning\n\n\nBygde enhetlig rapporteringssystem med SQLite-database, konsoliderte fragmenterte data fra flere avdelinger og 1C-systemer\n\n\nAutomatiserte salgsvisualisering og analyse ved hjelp av R, muliggjorde dynamiske rapporter for enhver tidsperiode\n\n\nImplementerte historisk sporing av lager og salg i pengeverdi, ga øyeblikkelig tilgang til tidligere data\n\n\nGjennomførte konkurrent- og butikklokasjonsanalyse, identifiserte salgsmønstre og 
kundepreferanser\n\n\n\n\nPython R SQL SQLite 1C Datavisualisering\n\n\n\n\n\n\n\n\nRisikostyringsspesialist\n\n\n Forward Bank LLC, Ukraina\n\n\n Nov 2020 – Okt 2022\n\n\nAnsvarlig for risikoanalyse, kundeverifisering og rapporteringsautomatisering på tvers av 5 bankprodukter.\n\n\n\nViktige prestasjoner:\n\n\n\nAutomatiserte store månedlige risikorapporter (salg, godkjenningsrater, mislighold) ved å migrere fra Excel til R, håndterte flere produkter og kundesegmenter\n\n\nUtviklet automatisert kundeverifikasjonssystem som sjekker svartelister, kreditthistoriedatabaser (BKI/PTI) og inntekts-/utgiftsanalyse\n\n\nOpprettet tilpassede valideringsskript basert på ledelseskrav, oversatte forretningsregler til automatiserte kodekontroller\n\n\nDesignet og implementerte nye risikometrikker i samarbeid med avdelingsleder, vellykket integrert i arbeidsflyt\n\n\nMigrerte flertallet av Excel-baserte oppgaver til R + SQL, oppnådde full automatisering for de fleste prosesser\n\n\n\n\nR SQL Oracle DB Databehandling Risikoanalyse\n\n\n\n\n\n\n\n\nTeknisk spesialist\n\n\n Band, Ukraina\n\n\n 2016 – 2018\n\n\nLeverte VPN-infrastruktur og teknisk støtte til ~10 brukere, vedlikeholdt virtuelle maskiner og sikret uavbrutt tjeneste.\n\n\n\nViktige prestasjoner:\n\n\n\nAdministrerte og vedlikeholdt 10 virtuelle maskiner (Windows/VirtualBox) med VPN-konfigurasjon, sikret uavbrutt drift for brukere\n\n\nMigrerte VM-er fra Windows til Linux på Google Cloud Platform, reduserte ressurskostnader betydelig og muliggjorde flere instanser per samme budsjett\n\n\nKonfigurerte lettveis programvare (f.eks. Midori-nettleser) for RAM-begrensede miljøer\n\n\nLeverte ekstern teknisk støtte for brukere på tvers av Windows- og Linux-plattformer\n\n\n\n\nLinux Windows VirtualBox Google Cloud Platform VPN\n\n\n\n\n\n\n\nUtdanning\n\n\n\nBachelor i datavitenskap\n\n 2016 – 2020\n\n\n Dmytro Motornyi Tavria State Agrotechnological University\n\n\nInformasjonsteknologi, Heltid\n\n\n\n\n\n\n\n Back to top"
},
{
"objectID": "about-ua.html",
"href": "about-ua.html",
"title": "Володимир Шевченко",
"section": "",
"text": "Аналітик даних та IT-спеціаліст\n\n\n Telegram Email\n\n Завантажити CV\n\n\n\nКоротка інформація\n\n\n\n Місцезнаходження: Бйорнафйорден, Норвегія\n\n\n Дата народження: 1997\n\n\n Статус: Відкритий до пропозицій\n\n\n Водійські права: Категорія B\n\n\n\n\n\nМови\n\n\n \n Норвезька\n Середній\n \n \n \n \n\n\n \n Англійська\n Базовий\n \n \n \n \n\n\n \n Українська\n Рідна\n \n \n \n \n\n\n \n Російська\n Рідна\n \n \n \n \n\n\n\n\n\n\nРезюме\n\n\nАналітик даних та IT-спеціаліст з багатим досвідом в автоматизації процесів, системах звітності та аналізі ризиків.\n\n\nСпеціалізуюся на трансформації складних бізнес-процесів в автоматизовані рішення з використанням Python, R та SQL. Мій досвід охоплює економіку, управління ризиками в банківській сфері та технічну інфраструктуру, що дає мені унікальне розуміння як бізнес-потреб, так і технічної реалізації.\n\n\nЦіную чесність, відповідальність та точність у роботі. Я відкритий до навчання, допитливий та готовий освоювати нові навички. 
У вільний час люблю проводити час на природі, з сім’єю та друзями.\n\n\n\n\nДосвід роботи\n\n\n\n\n\n\n\nЕкономіст\n\n\n Мелітопольський молочний завод, Україна\n\n\n Червень 2023 – Грудень 2023\n\n\nIT-спеціаліст з економічним ухилом, відповідальний за консолідацію даних підрозділів та побудову єдиної інфраструктури звітності.\n\n\n\nКлючові досягнення:\n\n\n\nРозробив комплексну систему розрахунку собівартості продукції на R, проаналізував усі підрозділи заводу, включаючи закупівельні ціни, склад продуктів, зарплати, логістику, комунальні витрати та амортизацію обладнання\n\n\nПобудував єдину систему звітності з базою даних SQLite, консолідувавши фрагментовані дані з декількох відділів та систем 1C\n\n\nАвтоматизував візуалізацію продажів та аналітику за допомогою R, забезпечивши динамічні звіти за будь-який період часу\n\n\nВпровадив історичне відстеження запасів та продажів у грошовому вираженні, забезпечивши миттєвий доступ до минулих даних\n\n\nПровів аналіз конкурентів та розташування магазинів, виявив закономірності продажів та переваги клієнтів\n\n\n\n\nPython R SQL SQLite 1C Візуалізація даних\n\n\n\n\n\n\n\n\nСпеціаліст з управління ризиками\n\n\n Forward Bank LLC, Україна\n\n\n Листопад 2020 – Жовтень 2022\n\n\nВідповідальний за аналіз ризиків, верифікацію клієнтів та автоматизацію звітності по 5 банківських продуктах.\n\n\n\nКлючові досягнення:\n\n\n\nАвтоматизував масштабні щомісячні звіти з ризиків (продажі, коефіцієнти схвалення, дефолти) шляхом міграції з Excel на R, обробляючи безліч продуктів та клієнтських сегментів\n\n\nРозробив автоматизовану систему верифікації клієнтів з перевіркою чорних списків, баз кредитних історій (БКІ/ПТІ) та аналізом доходів/витрат\n\n\nСтворив користувацькі скрипти перевірки на основі вимог керівництва, перетворюючи бізнес-правила в автоматизовані перевірки коду\n\n\nРозробив та впровадив нові метрики ризиків у співпраці з керівником відділу, успішно інтегрував у робочий процес\n\n\nПереклав більшість задач на основі 
Excel на R + SQL, досягнувши повної автоматизації більшості процесів\n\n\n\n\nR SQL Oracle DB Обробка даних Аналіз ризиків\n\n\n\n\n\n\n\n\nТехнічний спеціаліст\n\n\n Band, Україна\n\n\n 2016 – 2018\n\n\nЗабезпечував VPN-інфраструктуру та технічну підтримку для ~10 користувачів, обслуговував віртуальні машини та забезпечував безперебійну роботу сервісу.\n\n\n\nКлючові досягнення:\n\n\n\nКерував та обслуговував 10 віртуальних машин (Windows/VirtualBox) з конфігурацією VPN, забезпечуючи безперебійну роботу для користувачів\n\n\nПереклав віртуальні машини з Windows на Linux на Google Cloud Platform, значно знизивши витрати на ресурси та збільшивши кількість екземплярів в рамках того ж бюджету\n\n\nНалаштував полегшене програмне забезпечення (наприклад, браузер Midori) для середовищ з обмеженою оперативною пам’яттю\n\n\nЗабезпечував віддалену технічну підтримку користувачів на платформах Windows та Linux\n\n\n\n\nLinux Windows VirtualBox Google Cloud Platform VPN\n\n\n\n\n\n\n\nОсвіта\n\n\n\nБакалавр комп’ютерних наук\n\n 2016 – 2020\n\n\n Таврійський державний агротехнологічний університет імені Дмитра Моторного\n\n\nІнформаційні технології, Денна форма\n\n\n\n\n\n\n\n Back to top"
},
{
"objectID": "about.html",
"href": "about.html",
"title": "Vladimir Shevchenko",
"section": "",
"text": "Data Analyst & IT Specialist\n\n\n Telegram Email\n\n Download CV\n\n\n\nQuick Info\n\n\n\n Location: Bjørnafjorden, Norway\n\n\n Born: 1997\n\n\n Status: Open to opportunities\n\n\n Driver License: Category B\n\n\n\n\n\nLanguages\n\n\n \n Norwegian\n Intermediate\n \n \n \n \n\n\n \n English\n Basic\n \n \n \n \n\n\n \n Ukrainian\n Native\n \n \n \n \n\n\n \n Russian\n Native\n \n \n \n \n\n\n\n\n\n\nSummary\n\n\nData analyst and IT specialist with strong experience in process automation, reporting systems, and risk analytics.\n\n\nI specialize in transforming complex business processes into automated solutions using Python, R, and SQL. My background spans economics, banking risk management, and technical infrastructure, giving me a unique perspective on both business needs and technical implementation.\n\n\nI value honesty, responsibility, and precision in my work. I’m eager to learn, curious, and open to acquiring new skills. In my free time, I enjoy being outdoors and spending time with family and friends.\n\n\n\n\nWork Experience\n\n\n\n\n\n\n\nEconomist\n\n\n Melitopol Milk Factory, Ukraine\n\n\n Jun 2023 – Dec 2023\n\n\nIT specialist with economics focus, responsible for consolidating departmental data and building unified reporting infrastructure.\n\n\n\nKey Achievements:\n\n\n\nDeveloped comprehensive product cost calculation system in R, analyzing all factory divisions including procurement prices, product composition, salaries, logistics, utilities, and equipment depreciation\n\n\nBuilt unified reporting system with SQLite database, consolidating fragmented data from multiple departments and 1C systems\n\n\nAutomated sales visualization and analytics using R, enabling dynamic reports for any time period\n\n\nImplemented historical tracking of inventory and sales in monetary terms, providing instant access to past data\n\n\nConducted competitor and store location analysis, identifying sales patterns and customer preferences\n\n\n\n\nPython R SQL 
SQLite 1C Data Visualization\n\n\n\n\n\n\n\n\nRisk Management Specialist\n\n\n Forward Bank LLC, Ukraine\n\n\n Nov 2020 – Oct 2022\n\n\nResponsible for risk analysis, client verification, and reporting automation across 5 banking products.\n\n\n\nKey Achievements:\n\n\n\nAutomated large-scale monthly risk reports (sales, approval rates, defaults) by migrating from Excel to R, handling multiple products and client segments\n\n\nDeveloped automated client verification system checking blacklists, credit history databases (BKI/PTI), and income/expense analysis\n\n\nCreated custom validation scripts based on management requirements, translating business rules into automated code checks\n\n\nDesigned and implemented new risk metrics in collaboration with department head, successfully integrated into workflow\n\n\nMigrated majority of Excel-based tasks to R + SQL, achieving full automation for most processes\n\n\n\n\nR SQL Oracle DB Data Processing Risk Analytics\n\n\n\n\n\n\n\n\nTechnical Specialist\n\n\n Band, Ukraine\n\n\n 2016 – 2018\n\n\nProvided VPN infrastructure and technical support for ~10 users, maintaining virtual machines and ensuring uninterrupted service.\n\n\n\nKey Achievements:\n\n\n\nManaged and maintained 10 virtual machines (Windows/VirtualBox) with VPN configuration, ensuring uninterrupted operation for users\n\n\nMigrated VMs from Windows to Linux on Google Cloud Platform, significantly reducing resource costs and enabling more instances per same budget\n\n\nConfigured lightweight software (e.g., Midori browser) for RAM-constrained environments\n\n\nProvided remote technical support for users across Windows and Linux platforms\n\n\n\n\nLinux Windows VirtualBox Google Cloud Platform VPN\n\n\n\n\n\n\n\nEducation\n\n\n\nBachelor in Computer Science\n\n 2016 – 2020\n\n\n Dmytro Motornyi Tavria State Agrotechnological University\n\n\nInformation Technology, Full-time\n\n\n\n\n\n\n\n Back to top"
},
{
"objectID": "portfolio/index.html",
"href": "portfolio/index.html",
"title": "Portfolio",
"section": "",
"text": "The page contains visual and text examples of interesting results.\n\n\n\n \n \n \n Order By\n Default\n \n Date - Oldest\n \n \n Date - Newest\n \n \n Title\n \n \n \n \n \n \n \n\n\n\n\n\n\n\n\nInteractive Todo.txt Tree Viewer\n\n18 min\n\n\nPython\n\nTUI\n\n\n\n\n14 March 2026\n\n\n\n\n\n\n\n\n\n\n\nBinol vs Olive Sales Analysis\n\n5 min\n\n\nPython\n\nDuckDB\n\n\n\n\n14 December 2024\n\n\n\n\n\n\n\n\n\n\n\nOCR Reader\n\n4 min\n\n\nPython\n\nOCR\n\n\n\n\n06 December 2024\n\n\n\n\n\n\n\n\n\n\n\nWeb Scraping Job Vacancies\n\n10 min\n\n\nPython\n\nWeb-Scraping\n\n\n\n\n20 July 2024\n\n\n\n\n\n\n\n\n\n\n\nBeautiful Export to Excel (xlsx)\n\n5 min\n\n\nR\n\nExcel\n\n\n\n\n10 July 2024\n\n\n\n\n\n\n\n\n\n\n\nCheese Sales Analysis\n\n2 min\n\n\nR\n\nGraphs\n\n\n\n\n06 July 2024\n\n\n\n\n\n\nNo matching items\n\n\n\n\n\n Back to top"
},
{
"objectID": "portfolio/web-scraping-hh/web-scraping-hh.html",
"href": "portfolio/web-scraping-hh/web-scraping-hh.html",
"title": "Web Scraping Job Vacancies",
"section": "",
"text": "This code demonstrates how to programmatically extract information about job vacancies from hh.ru. The script scans the primary search results page, iterates through aggregated vacancies, navigates to each individual job posting, and selectively extracts relevant data points like salary, title, and required tech stacks."
},
{
"objectID": "portfolio/web-scraping-hh/web-scraping-hh.html#imports-and-setup",
"href": "portfolio/web-scraping-hh/web-scraping-hh.html#imports-and-setup",
"title": "Web Scraping Job Vacancies",
"section": "Imports and Setup",
"text": "Imports and Setup\nThe core libraries for this parsing task are:\n1. requests: Handles the HTTP requests to communicate with the website and download the raw HTML.\n2. bs4 (BeautifulSoup): An HTML parsing tool that traverses the raw HTML DOM string returned by requests, allowing us to search for specific tags and classes.\n3. polars: High-performance data manipulation to store and prepare the scraped data.\n4. xlsxwriter: Exports our final Polars DataFrame to a styled Excel spreadsheet.\n\nimport requests\nfrom bs4 import BeautifulSoup\nimport polars as pl\nimport re\nimport xlsxwriter\nimport time\n\n# 1. Initialize an empty Polars DataFrame defining the schema of our output\ndf = pl.DataFrame({ \n \"URL\": pl.Series([], dtype=pl.Utf8),\n \"Вакансия\": pl.Series([], dtype=pl.Utf8),\n \"Зарплата\": pl.Series([], dtype=pl.Int64),\n \"Keyword\": pl.Series([], dtype=pl.Utf8),\n \"Tags\": pl.Series([], dtype=pl.Utf8)\n}) \n\nquery = \"excel\" \n\n# 2. Construct the root URL based on manual website filters. \n# Changing filters via the UI, appending the desired parameters to this string.\nstart_url = \"https://hh.ru/search/vacancy?experience=between1And3\" \\\n \"&order_by=publication_time&ored_clusters=true\" \\\n f\"&schedule=remote&text={query}&search_period=7\" \n\nurl = start_url \n\n# 3. Define custom headers to simulate a legitimate browser request.\n# Websites often block requests utilizing default python-requests user agents.\nheaders = { \n \"User-Agent\": \"Mozilla/5.0 (Windows NT 11.0; Win64; x64) \" \\\n \"AppleWebKit/538.33 (KHTML, like Gecko) Chrome/98.0.4472.124 Safari/537.36\",\n \"Accept-Language\": \"ru-RU,ru;q=0.9\",\n \"Accept-Encoding\": \"gzip, deflate, br\",\n \"Connection\": \"keep-alive\",\n \"Upgrade-Insecure-Requests\": \"1\"\n}\n\n# 4. Define specific keywords or technologies to scan for within the vacancy descriptions.\nkeywords = [\"BPMN\", \"Jira\"]"
},
{
"objectID": "portfolio/web-scraping-hh/web-scraping-hh.html#validating-extracted-pages",
"href": "portfolio/web-scraping-hh/web-scraping-hh.html#validating-extracted-pages",
"title": "Web Scraping Job Vacancies",
"section": "Validating Extracted Pages",
"text": "Validating Extracted Pages\nModern web pages are complicated, and scraping requests occasionally return incomplete or broken domestic DOM trees (due to lazy-loading or server load). This function attempts to re-fetch a page up to 9 times if the core structural identifiers (like the vacancy-title container) fail to render.\n\ndef check_correctly(url):\n for i in range(9): \n response = requests.get(url, headers=headers) \n soup = BeautifulSoup(response.text, 'html.parser') \n\n # We designate the 'vacancy-title' div as our proof of a successful, complete load\n header = soup.find('div', {'class': 'vacancy-title'}) \n \n if header: \n # If successful, pass the soup context to the extraction function\n visit_and_check(url, soup, response) \n break"
},
{
"objectID": "portfolio/web-scraping-hh/web-scraping-hh.html#extracting-the-job-profile",
"href": "portfolio/web-scraping-hh/web-scraping-hh.html#extracting-the-job-profile",
"title": "Web Scraping Job Vacancies",
"section": "Extracting the Job Profile",
"text": "Extracting the Job Profile\nThis function dissects the specific job posting, scraping the title, salary, skills, and checking for our pre-defined keywords.\n\ndef visit_and_check(url, soup, response):\n global df \n\n fkeys = [] \n\n # Scan the raw text of the entire response for our predefined keywords\n for keyword in keywords: \n if keyword in response.text: \n fkeys.append(keyword) \n\n # Extract distinct skill tags located at the bottom of the vacancy page\n skill_elements = soup.find_all('li', {'data-qa': 'skills-element'}) \n skills = [li.find('div', class_=re.compile(r'magritte-tag__label')).text for li in skill_elements] \n\n # Extract Title Container\n header = soup.find('div', {'class': 'vacancy-title'}) \n title = header.find('h1').text \n\n # Salary Extraction Logic\n # Often salaries contain ranges, words, or are omitted entirely.\n salary_str = header.find('span', {'data-qa': 'vacancy-salary-compensation-type-net'}) \n \n if salary_str: \n # Regex to strip out currency symbols, whitespace (\\xa0), and words, catching the first integer\n match = re.search(r'\\d+', salary_str.text.replace('\\xa0', '')) \n if match:\n salary = int(match.group()) \n else:\n salary = None \n else:\n salary = None \n\n # Vertically stack the newly parsed vacancy into our global dataframe\n df = df.vstack(pl.DataFrame({ \n \"URL\": [url], \n \"Вакансия\" : [title], \n \"Зарплата\" : [salary], \n \"Keyword\": [', '.join(fkeys)], \n \"Tags\": [', '.join(skills)]\n }))"
},
{
"objectID": "portfolio/web-scraping-hh/web-scraping-hh.html#iterating-over-the-search-results",
"href": "portfolio/web-scraping-hh/web-scraping-hh.html#iterating-over-the-search-results",
"title": "Web Scraping Job Vacancies",
"section": "Iterating over the Search Results",
"text": "Iterating over the Search Results\nThis loop controls the pagination across the main search results. It iterates over the pages, extracts the individual vacancy URLs sequentially, and hands them off to the validation and extraction functions above.\n\n# Iterate through the first 9 pages of search results\nfor p in range(9): \n if p > 0:\n # Construct the URL for the next paginated slice\n url = f'{start_url}&page={p}' \n print(f\"--- Page {p} --- --- ---\")\n\n # The 9-iteration retry loop for the primary search page layout\n for i in range(9): \n # Crucial: Pause execution to avoid DDoS-ing the target server and risking an IP ban\n time.sleep(3) \n print(f\"--- --- --- Iteration {i}\")\n \n response = requests.get(url, headers=headers) \n soup = BeautifulSoup(response.text, 'html.parser') \n \n # We ensure the root aggregate DOM wrapper (h2 tags in this case) has successfully loaded\n if soup.find_all('h2'): \n for h2 in soup.find_all('h2'): \n for a in h2.find_all('a', href=True): \n # Convert the potentially relative URL inside the href attribute to a full absolute URL\n absolute_url = requests.compat.urljoin(url, a['href']) \n \n # Offload the specific job scraping flow\n check_correctly(absolute_url) \n \n # Break the retry loop once we successfully process all vacancies on this page \n break"
},
{
"objectID": "portfolio/web-scraping-hh/web-scraping-hh.html#exporting-to-excel",
"href": "portfolio/web-scraping-hh/web-scraping-hh.html#exporting-to-excel",
"title": "Web Scraping Job Vacancies",
"section": "Exporting to Excel",
"text": "Exporting to Excel\nAfter populating the global Polars DataFrame, we can dump the contents into a cleanly formatted .xlsx workbook utilizing python’s xlsxwriter.\nwb = xlsxwriter.Workbook('Output.xlsx') \nws = wb.add_worksheet('DF') \n\n# Utilize Polars native excel serialization with formatting options\ndf.write_excel( \n workbook=wb, \n worksheet='DF', \n position=\"A1\", \n table_style=\"Table Style Medium 3\", \n dtype_formats={pl.Date: \"mm/dd/yyyy\"}, \n column_totals={\"score\": \"average\"}, \n float_precision=1, \n autofit=True, \n) \n\n# Manually override the widths of specific columns for readability\nws.set_column('A:A', 10) \nws.set_column('B:B', 40) \nws.set_column('C:C', 10) \nws.set_column('D:D', 20) \nws.set_column('E:E', 90) \n\nwb.close()"
},
{
"objectID": "portfolio/web-scraping-hh/web-scraping-hh.html#result",
"href": "portfolio/web-scraping-hh/web-scraping-hh.html#result",
"title": "Web Scraping Job Vacancies",
"section": "Result",
"text": "Result\nThe resulting Excel file perfectly catalogues the market data for analysis."
},
{
"objectID": "portfolio/export-to-xlsx/export-to-xlsx.html",
"href": "portfolio/export-to-xlsx/export-to-xlsx.html",
"title": "Beautiful Export to Excel (xlsx)",
"section": "",
"text": "NoteOverview\n\n\n\nThis article briefly describes the powerful formatting capabilities of the openxlsx package in R. Below is a practical example demonstrating how to meticulously build, style, and export a data table from R to a highly customized Excel .xlsx file."
},
{
"objectID": "portfolio/export-to-xlsx/export-to-xlsx.html#libraries-and-styles-definition",
"href": "portfolio/export-to-xlsx/export-to-xlsx.html#libraries-and-styles-definition",
"title": "Beautiful Export to Excel (xlsx)",
"section": "Libraries and Styles Definition",
"text": "Libraries and Styles Definition\nUnlike writexl, openxlsx allows you to construct styles inside R and apply them cell-by-cell. First, we load the library and pre-define our color palette and structural styles (like bold headers and borders).\nlibrary(openxlsx)\n\n## --- Color Palette\ngray <- createStyle(fgFill = '#d9d9d9')\ngreen <- createStyle(fgFill = '#c6e0b4')\nred <- createStyle(fgFill = '#f9a1a1')\nblue <- createStyle(fgFill = '#bdd7ee')\nyellow <- createStyle(fgFill = '#ffe699')\nros <- createStyle(fgFill = '#fce4d6')\nborder <- createStyle(fgFill = '#333333')\n\n## --- Structural Styles\nst_bord <- createStyle(numFmt = \"#,##0\",\n border = 'TopBottomLeftRight', borderColour = '#cccccc')\nst_head <- createStyle(textDecoration = \"bold\", halign = \"center\",\n border = 'TopBottomLeftRight', borderColour = border)\nst_bot <- createStyle(textDecoration = 'bold', border = \"top\",\n borderColour = border, borderStyle = \"medium\")\nst_name <- createStyle(halign = \"center\", textDecoration = 'bold',\n fontSize = 16, border = 'TopBottomLeftRight',\n borderColour = border, borderStyle = 'medium')\nst_bold <- createStyle(textDecoration = 'bold', border = \"left\",\n borderColour = border, borderStyle = \"medium\")"
},
{
"objectID": "portfolio/export-to-xlsx/export-to-xlsx.html#creating-the-workbook-and-setting-options",
"href": "portfolio/export-to-xlsx/export-to-xlsx.html#creating-the-workbook-and-setting-options",
"title": "Beautiful Export to Excel (xlsx)",
"section": "Creating the Workbook and Setting Options",
"text": "Creating the Workbook and Setting Options\nEverything starts by initializing a Workbook object in RAM. This object acts as an in-memory representation of our final Excel file, where we will add sheets, inject data, and manipulate settings such as paper orientation and print margins.\n## 1. Initialize an empty workbook\nwb <- createWorkbook()\n\n## 2. Add a new worksheet named \"Sales\". Disabling gridLines makes it look like a clean canvas.\naddWorksheet(wb, sheetName = \"Sales\", orientation = 'portrait', gridLines = FALSE)\n\n## 3. Configure print layout margins (especially useful for automated physical reports)\npageSetup(wb, \"Sales\", left = 0.25, top = 0.25, right = 0.25, bottom = 0.25)"
},
{
"objectID": "portfolio/export-to-xlsx/export-to-xlsx.html#writing-data-and-applying-customization",
"href": "portfolio/export-to-xlsx/export-to-xlsx.html#writing-data-and-applying-customization",
"title": "Beautiful Export to Excel (xlsx)",
"section": "Writing Data and Applying Customization",
"text": "Writing Data and Applying Customization\nNow, we will structure the report sheet. A standard approach involves adding titles, merging cells across the width of the table, injecting the actual dataframe, and finally splashing our pre-defined styles onto the desired coordinate grids.\n## Write the Title to the first row of the sheet\nwriteData(wb, 'Sales', \"Sales\")\n\n## Merge the title cells across the entire length of our incoming data table\nmergeCells(wb, sheet = \"Sales\", cols = 1:length(fin_sales), rows = 1) \n\n## Apply the large bold style (st_name) to the newly merged title row\naddStyle(wb, 'Sales', st_name, 1, 1:length(fin_sales), stack = TRUE) \n\n## Inject the data frame starting at Row 3 (leaving Row 2 empty for spacing)\nwriteData(wb, 'Sales', fin_sales, startRow = 3)\n\n## Apply our bold header style starting exactly on Row 3\naddStyle(wb, 'Sales', st_head, 3, 1:length(fin_sales),\n gridExpand = TRUE, stack = TRUE)\nWait, we aren’t done. We still need to configure the column widths and paint specific highlighting on certain metric columns.\n## Widen the first column to comfortably fit descriptive labels\nsetColWidths(wb, \"Sales\", cols = 1, widths = 25) \n\n## Embellish the body of the table with standard inner borders\naddStyle(wb, 'Sales', st_bord, 4:(nrow(fin_sales)+3),\n 1:length(fin_sales), stack = TRUE, gridExpand = TRUE)\n\n## Add a sturdy bottom border to the totals sub-row\naddStyle(wb, 'Sales', st_bot, (which(!is.na(temp_sales$PROC))+3),\n 1:length(temp_sales), stack = TRUE, gridExpand = TRUE)\n\n## Add a final heavy bottom line at the absolute end of the table\naddStyle(wb, 'Sales', st_bot, nrow(fin_sales)+3, 1:length(fin_sales), stack = TRUE)\n\n## Center align the last numerical column\naddStyle(wb, 'Sales', createStyle(halign = \"center\"),\n 3:(nrow(fin_sales)+3), length(fin_sales), stack = TRUE, gridExpand = TRUE)\n\n## Ensure the last numerical column is wide enough\nsetColWidths(wb, \"Sales\", cols = length(fin_sales), 
widths = 18)\n\n## Colorizing Specific Header Cells for aesthetic pop\naddStyle(wb, 'Sales', gray, 3, 1, stack = TRUE)\naddStyle(wb, 'Sales', ros, 3, c(2,3), stack = TRUE, gridExpand = TRUE)\naddStyle(wb, 'Sales', green, 3, 4, stack = TRUE)\naddStyle(wb, 'Sales', blue, 3, 5, stack = TRUE)\naddStyle(wb, 'Sales', yellow, 3, 6, stack = TRUE)"
},
{
"objectID": "portfolio/export-to-xlsx/export-to-xlsx.html#result",
"href": "portfolio/export-to-xlsx/export-to-xlsx.html#result",
"title": "Beautiful Export to Excel (xlsx)",
"section": "Result",
"text": "Result\nIf all logic fires correctly, our R-side table is translated instantly into a stylized, production-ready Excel document."
},
{
"objectID": "portfolio/export-to-xlsx/export-to-xlsx.html#conclusion",
"href": "portfolio/export-to-xlsx/export-to-xlsx.html#conclusion",
"title": "Beautiful Export to Excel (xlsx)",
"section": "Conclusion",
"text": "Conclusion\nThis explicit indexing method of saving to .xlsx is “verbose”, but it provides absolute programmatic flexibility. It lets you format a report down to the granular level of individual cells dynamically.\nIf you are generating a specific corporate report regularly (where only the inbound numbers change), frontloading this effort ensures every sequential report output requires zero manual Excel touch-ups — saving tremendous amounts of time in iteration."
},
{
"objectID": "portfolio/todo-tree-viewer/todo-tree-viewer.html#why-this-matters",
"href": "portfolio/todo-tree-viewer/todo-tree-viewer.html#why-this-matters",
"title": "Interactive Todo.txt Tree Viewer",
"section": "Why This Matters",
"text": "Why This Matters\nWhen your task list grows into a multi-level hierarchy with dependencies, notes, and links, a plain text file becomes unwieldy. Sure, you could open it in an editor, search for link:, jump between files—but that’s exhausting. What you really want is to see the whole structure at once: who depends on whom, which tasks are connected, where the priorities lie.\nshow-links.py is an interactive TUI visualizer for todo.txt that displays tasks as a dependency tree, filters by status, context, and tags, and integrates with markdown notes. Think of it as a file manager for tasks—quick navigation through the hierarchy, seeing the big picture, and editing what you need with a single keystroke."
},
{
"objectID": "portfolio/todo-tree-viewer/todo-tree-viewer.html#data-architecture",
"href": "portfolio/todo-tree-viewer/todo-tree-viewer.html#data-architecture",
"title": "Interactive Todo.txt Tree Viewer",
"section": "Data Architecture",
"text": "Data Architecture\nThe foundation rests on two key structures: Task and RNote.\n\nTask — The Task Model\nEach line in todo.txt gets parsed into a Task object:\n\n\nshow-links.py\n\n@dataclass\nclass Task:\n line_num: int; raw: str; title: str\n done: bool = False\n priority: Optional[str] = None\n1 tid: Optional[str] = None\n2 status: Optional[str] = None\n3 link: Optional[str] = None\n4 ttype: Optional[str] = None\n5 tags: List[str] = field(default_factory=list)\n6 tags_lc: Set[str] = field(default_factory=set)\n7 ctx: Optional[str] = None\n8 due: Optional[str] = None\n filepath: Optional[Path] = None\n notes: List[RNote] = field(default_factory=list)\n\n\n1\n\nUnique task identifier (e.g., id:some_task)\n\n2\n\nCurrent status: run, hold, idea, todo\n\n3\n\nLink to parent task for dependency chain\n\n4\n\nTask type: dev, bug, feature, etc.\n\n5\n\nTags for categorization like +project, +urgent\n\n6\n\nLowercase cache for case-insensitive search\n\n7\n\nContext marker like @home, @work\n\n8\n\nDue date in ISO format 2024-03-15\n\n\nInteresting detail: tags are stored in two formats—both the original list and a lowercase cache. 
This enables case-insensitive search without repeated conversions.\nParsing happens via regex patterns that extract metadata directly from the task text:\n\n\nshow-links.py\n\n1_RE_PRIORITY = re.compile(r\"\\(([A-Z])\\)\")\n2_RE_TASK_ID = re.compile(r\"id:(\\S+)\")\n3_RE_LINK = re.compile(r\"link:(\\S+)\")\n4_RE_STATUS = re.compile(r\"st:(\\S+)\")\n5_RE_DUE = re.compile(r\"due:([\\d-]+)\")\n\n\n1\n\nCaptures priority in format (A), (B), (C)\n\n2\n\nExtracts unique task ID\n\n3\n\nFinds link to parent task\n\n4\n\nDetermines current status (run/hold/idea)\n\n5\n\nParses ISO date format for deadlines\n\n\n\n\nRNote — Note Hierarchy\nMarkdown notes are parsed into a tree structure based on heading levels:\n\n\nshow-links.py\n\n@dataclass\nclass RNote:\n title: str\n1 ntype: Optional[str] = None\n2 date: Optional[str] = None\n3 nid: Optional[str] = None\n4 link: Optional[str] = None\n level: int = 2\n line_num: int = 0\n filepath: Optional[Path] = None\n content: Optional[str] = None\n content_lines: List[Tuple[int, str]] = field(default_factory=list)\n5 children: List[\"RNote\"] = field(default_factory=list)\n\n\n1\n\nNote type: OBS (observation), HYP (hypothesis), DO (action)\n\n2\n\nDate the note was created\n\n3\n\nOptional note identifier for cross-referencing\n\n4\n\nLink to parent note for hierarchy\n\n5\n\nChild notes form the tree structure\n\n\nThe parser builds notes using a stack approach—a classic technique for building a tree from a flat list:\n\n\nshow-links.py\n\ndef parse_notes(content: str, filepath: Optional[Path] = None) -> List[RNote]:\n1 notes: List[RNote] = []; stack: List[RNote] = []\n \n for lineno, line in enumerate(content.split(\"\\n\"), 1):\n2 hm = _RE_HEADER.match(line)\n if hm:\n3 lv, raw = len(hm.group(1)), hm.group(2).strip()\n \n # Collapse stack to appropriate level\n4 while stack and stack[-1].level >= lv:\n stack.pop()\n \n note = RNote(\n title=raw, level=lv,\n ntype=_ex(_RE_TYPE, raw),\n date=_ex(_RE_NDATE, raw),\n )\n \n5 
if stack:\n stack[-1].children.append(note)\n6 else:\n notes.append(note)\n \n7 stack.append(note)\n\n\n1\n\nInitialize empty notes list and stack for hierarchy\n\n2\n\nFind markdown headings (##, ###, etc.)\n\n3\n\nCount # symbols to determine nesting level\n\n4\n\nPop stack until we reach appropriate parent level\n\n5\n\nIf stack exists, add as child to current parent\n\n6\n\nOtherwise, this is a root-level note\n\n7\n\nPush current note onto stack for next iteration\n\n\nThe stack maintains the “path” from root to current element, automatically handling arbitrary nesting depths.\n\n\nDependency Graph\nAfter parsing all tasks and notes, a dependency graph is built via link: fields:\n\n\nshow-links.py\n\ndef build_graph(tasks: List[Task]) -> tuple:\n1 id2t = {t.tid: t for t in tasks if t.tid}\n2 c2p = {} # child_id -> parent_id\n \n for t in tasks:\n3 if t.link and t.link in id2t:\n c2p[t.tid] = t.link\n \n return id2t, c2p\n\n\n1\n\nBuild lookup table: task ID → Task object\n\n2\n\nBuild parent mapping: child ID → parent ID\n\n3\n\nOnly add link if parent task actually exists\n\n\nNow you can build the task tree: if a task has link:parent_task, it becomes a child of parent_task."
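The two-step lookup is easy to exercise in isolation. Below is a minimal, self-contained sketch: Task is trimmed to just the tid and link fields the graph needs (the real dataclass has many more), and the parent-filtering behavior is shown on a toy task list.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class Task:
    # Trimmed stand-in for the full Task dataclass: only the graph fields.
    tid: Optional[str] = None
    link: Optional[str] = None

def build_graph(tasks: List[Task]) -> Tuple[Dict[str, Task], Dict[str, str]]:
    id2t = {t.tid: t for t in tasks if t.tid}                        # id -> Task
    c2p = {t.tid: t.link for t in tasks if t.link and t.link in id2t}
    return id2t, c2p

tasks = [Task("a"), Task("b", link="a"), Task("c", link="ghost")]
id2t, c2p = build_graph(tasks)
print(c2p)                                        # {'b': 'a'}
print([cid for cid, pid in c2p.items() if pid == "a"])   # ['b']
```

Note how "c" silently disappears from the mapping: its link points at a task ID that was never parsed, so the "parent actually exists" guard drops it instead of producing a dangling edge.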
},
{
"objectID": "portfolio/todo-tree-viewer/todo-tree-viewer.html#visualization-two-modes",
"href": "portfolio/todo-tree-viewer/todo-tree-viewer.html#visualization-two-modes",
"title": "Interactive Todo.txt Tree Viewer",
"section": "Visualization: Two Modes",
"text": "Visualization: Two Modes\nThe script supports two display modes—Tree (hierarchical) and Day (by date).\n\nTree Mode\n\nTree view is built recursively:\n\n\nshow-links.py\n\ndef build_tree(tasks: List[Task], id2t, c2p, ...) -> List[Node]:\n # Find root tasks (those without parents)\n1 roots = [t for t in tasks if not t.tid or t.tid not in c2p]\n \n2 def make_tree(task: Task) -> Node:\n3 children = [id2t[cid] for cid, pid in c2p.items() if pid == task.tid]\n return Node(\n data=task,\n4 children=[make_tree(c) for c in children],\n notes=[make_tree_note(n) for n in task.notes]\n )\n \n5 return [make_tree(r) for r in roots]\n\n\n1\n\nRoot tasks have no parent (not in child→parent mapping)\n\n2\n\nRecursive function to build tree from task\n\n3\n\nFind all tasks that link to current task\n\n4\n\nRecursively build child nodes\n\n5\n\nBuild forest (multiple root trees)\n\n\nThe tree is displayed with indentation and symbols:\n├─ (A) Write article st:run @home due:15.03\n│ ├─ Gather references st:todo\n│ └─ Prepare code st:hold\n└─ Submit PR st:idea\n\n\nDay Mode\n\nIn Day mode, tasks are grouped by date and status:\n\n\nshow-links.py\n\ndef build_day_view(tasks: List[Task]) -> List[DayGroup]:\n # Group by date: overdue / today / future\n1 overdue = [t for t in tasks if is_overdue(t.due)]\n2 today = [t for t in tasks if is_today(t.due)]\n3 future = [t for t in tasks if is_future(t.due)]\n \n # Within groups, sort by priority and status\n return [\n DayGroup(\"Overdue\", overdue),\n DayGroup(\"Today\", today),\n DayGroup(\"Upcoming\", future)\n ]\n\n\n1\n\nTasks past their due date\n\n2\n\nTasks due today\n\n3\n\nTasks due in the future\n\n\nThis is useful for planning—you can see what’s burning, what needs attention today, and what’s coming up."
},
{
"objectID": "portfolio/todo-tree-viewer/todo-tree-viewer.html#color-palette-and-rich",
"href": "portfolio/todo-tree-viewer/todo-tree-viewer.html#color-palette-and-rich",
"title": "Interactive Todo.txt Tree Viewer",
"section": "Color Palette and Rich",
"text": "Color Palette and Rich\nAll output is implemented via the Rich library, which enables beautiful terminal UI.\n\nColor Scheme\nColors are centrally defined:\n\n\nshow-links.py\n\nC = dict(\n1 red=\"#d78787\", blue=\"#87afd7\", yellow=\"#d7af87\", green=\"#87af87\",\n mag=\"#af87af\", cyan=\"#7aafaf\", gray=\"#8a8a8a\", dim=\"#585858\",\n orange=\"#af875f\", white=\"#c6c6c6\", sep=\"#818181\",\n)\n\n2PRIO = {\"A\": C[\"red\"], \"B\": C[\"blue\"], \"C\": C[\"sep\"]}\n3STAT = {\"idea\": C[\"yellow\"], \"todo\": C[\"dim\"], \"run\": C[\"blue\"],\n \"hold\": C[\"orange\"], \"lock\": C[\"red\"]}\n\n\n1\n\nBase color palette inspired by terminal themes\n\n2\n\nPriority levels: A (high/red), B (medium/blue), C (low/gray)\n\n3\n\nStatus colors: idea (yellow), active (blue), blocked (red)\n\n\nEach priority, status, and note type gets its own color for instant visual distinction.\n\n\nRendering with Tag Highlighting\nTags in task titles are highlighted separately:\n\n\nshow-links.py\n\ndef _title_text(title: str, text_col: str, tag_col: str) -> Text:\n tx = Text(no_wrap=False)\n1 for part in _RE_TAG_PAT.split(title):\n2 tx.append(part, style=tag_col if _RE_TAG_PAT.match(part) else text_col)\n return tx\n\n\n1\n\nSplit title on tag boundaries (e.g., +urgent)\n\n2\n\nApply tag color to tags, text color to everything else\n\n\nIf the title contains +urgent, it will be highlighted in the tag color while the rest uses the task’s main color.\n\n\nMarkdown in Notes\nNotes support basic markdown formatting—bold, italic, and code:\n\n\nshow-links.py\n\ndef _md_text(raw: str, base: str) -> Text:\n tx = Text(no_wrap=False); pos = 0\n1 for m in _RE_MD.finditer(raw):\n if m.start() > pos:\n tx.append(raw[pos:m.start()], style=base)\n2 if m.group(1) is not None:\n tx.append(m.group(1), style=f\"{base} bold\")\n3 elif m.group(2) is not None:\n tx.append(m.group(2), style=f\"{C['dim']} italic\")\n4 else:\n tx.append(m.group(3), style=f\"{C['dim']} on #303030\")\n pos = 
m.end()\n\n\n1\n\nFind markdown patterns: **text**, *text*, `text`\n\n2\n\nDouble asterisks = bold\n\n3\n\nSingle asterisks = italic\n\n4\n\nBackticks = code with dark background\n\n\nThis allows readable notes right in the terminal."
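The _RE_MD pattern itself is not shown in the article; here is a sketch of one possible definition whose capture groups line up with the three branches above (group 1 for bold, group 2 for italic, group 3 for code):

```python
import re

# Hypothetical reconstruction of _RE_MD: three alternatives, tried in order,
# so "**bold**" is consumed before the single-asterisk branch can see it.
_RE_MD = re.compile(r"\*\*(.+?)\*\*|\*(.+?)\*|`(.+?)`")

matches = [(m.group(1), m.group(2), m.group(3))
           for m in _RE_MD.finditer("**bold** and *italic* and `code`")]
print(matches)
# [('bold', None, None), (None, 'italic', None), (None, None, 'code')]
```

Exactly one group is non-None per match, which is what makes the if/elif/else dispatch in _md_text unambiguous.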
},
{
"objectID": "portfolio/todo-tree-viewer/todo-tree-viewer.html#navigation-and-control",
"href": "portfolio/todo-tree-viewer/todo-tree-viewer.html#navigation-and-control",
"title": "Interactive Todo.txt Tree Viewer",
"section": "Navigation and Control",
"text": "Navigation and Control\nThe entire interface is keyboard-driven, making it fast for power users.\n\nKeyboard Shortcuts\n↑ ↓ navigation\n→ / space expand / collapse node\n← collapse / go to parent\nv switch view (Tree ↔ Day)\nf search (text) / goto branch (digits)\nl n d c toggles (linked/nonotes/done/content)\ns @ + filters (status/context/tags)\nenter open in $EDITOR\nctrl+enter open task note\nr build branch from line\nesc reset filters\nq quit\n\n\nKey Reading\nKey reading is implemented via low-level termios:\n\n\nshow-links.py\n\ndef read_key() -> str:\n fd = sys.stdin.fileno()\n1 old_settings = termios.tcgetattr(fd)\n try:\n2 tty.setraw(fd)\n \n3 if select.select([sys.stdin], [], [], 0.1)[0]:\n ch = sys.stdin.read(1)\n \n # Recognize escape sequences\n4 if ch == \"\\x1b\":\n if select.select([sys.stdin], [], [], 0.1)[0]:\n ch2 = sys.stdin.read(1)\n if ch2 == \"[\":\n ch3 = sys.stdin.read(1)\n return {\"A\": \"up\", \"B\": \"down\", \n \"C\": \"right\", \"D\": \"left\"}[ch3]\n \n return {\"q\": \"q\", \"\\r\": \"enter\", \"\\x7f\": \"backspace\"}[ch]\n finally:\n5 termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)\n\n\n1\n\nSave current terminal settings\n\n2\n\nEnter raw mode (no line buffering)\n\n3\n\nNon-blocking check for input with 0.1s timeout\n\n4\n\nEscape sequences for arrow keys\n\n5\n\nRestore original terminal settings\n\n\nThis captures arrows, Enter, Backspace, and special combinations like Ctrl+Enter."
},
{
"objectID": "portfolio/todo-tree-viewer/todo-tree-viewer.html#filtering-and-search",
"href": "portfolio/todo-tree-viewer/todo-tree-viewer.html#filtering-and-search",
"title": "Interactive Todo.txt Tree Viewer",
"section": "Filtering and Search",
"text": "Filtering and Search\nOne of the most powerful features is the filtering system.\n\nText Search\nSearch works in real-time—the tree rebuilds with each character:\n\n\nshow-links.py\n\ndef _apply_search(st: St) -> None:\n buf = st.input_buf\n \n # If only digits entered — goto task by line number\n1 if buf.isdigit() and buf:\n target = st.ln2t.get(int(buf))\n st.root_tid = target.tid if target else \"\"\n st.flt_search = \"\"\n2 else:\n # Text search in titles\n st.flt_search = buf\n \n3 do_rebuild(st)\n\n\n1\n\nNumeric input = jump to line number\n\n2\n\nText input = filter by title content\n\n3\n\nRebuild entire tree with new filter\n\n\n\n\nBuilding Branch from Line\n“Goto” mode lets you enter a line number and build the tree from that task:\n\n\nshow-links.py\n\nif st.input_buf.isdigit():\n target = st.ln2t.get(int(st.input_buf))\n if target:\n # Find branch root (walk up parents)\n root = target\n1 seen = {target.tid}\n while root.tid in st.c2p:\n pid = st.c2p[root.tid]\n2 if pid in seen:\n break # cycle detected!\n seen.add(pid)\n root = st.id2t[pid]\n \n st.root_tid = root.tid\n do_rebuild(st)\n\n\n1\n\nTrack visited task IDs\n\n2\n\nBreak if we encounter a cycle (A → B → C → A)\n\n\nThis is very convenient for focusing on a specific subtask and all its dependencies."
},
{
"objectID": "portfolio/todo-tree-viewer/todo-tree-viewer.html#interesting-technical-details",
"href": "portfolio/todo-tree-viewer/todo-tree-viewer.html#interesting-technical-details",
"title": "Interactive Todo.txt Tree Viewer",
"section": "Interesting Technical Details",
"text": "Interesting Technical Details\n\nTree Flattening for Display\nThe tree structure is rendered through “flattening”—converting the tree into a flat list with parent indices:\n\n\nshow-links.py\n\n@dataclass\nclass FlatItem:\n node: Node\n1 depth: int\n2 parent_idx: int\n3 index: int\n\ndef flatten(roots: List[Node]) -> List[FlatItem]:\n flat: List[FlatItem] = []\n \n def walk(node: Node, depth: int, parent_idx: int):\n idx = len(flat)\n flat.append(FlatItem(node, depth, parent_idx, idx))\n \n4 if node.expanded:\n for child in node.children:\n walk(child, depth + 1, idx)\n \n for root in roots:\n5 walk(root, 0, -1)\n \n return flat\n\n\n1\n\nIndentation level for rendering\n\n2\n\nIndex of parent in flat list (-1 for roots)\n\n3\n\nOwn index in flat list\n\n4\n\nOnly recurse into expanded nodes\n\n5\n\nRoots have depth 0, no parent\n\n\nThis enables: - List display with scrolling (cursor = index in flat) - Quick parent lookup via parent_idx - Easy node expand/collapse\n\n\nHandling ANSI Codes\nRich generates ANSI escape codes for colors. When calculating string widths, these need to be ignored:\n\n\nshow-links.py\n\n1_RE_ANSI = re.compile(r\"\\x1b\\[[0-9;]*m\")\n\ndef _vlen(s: str) -> int:\n \"\"\"Visual length of string (excluding ANSI codes)\"\"\"\n2 return len(_RE_ANSI.sub(\"\", s))\n\n\n1\n\nPattern for ANSI escape sequences\n\n2\n\nStrip ANSI codes before counting characters\n\n\nWithout this, colored strings would be counted as longer than they appear in the terminal."
},
{
"objectID": "portfolio/todo-tree-viewer/todo-tree-viewer.html#conclusions",
"href": "portfolio/todo-tree-viewer/todo-tree-viewer.html#conclusions",
"title": "Interactive Todo.txt Tree Viewer",
"section": "Conclusions",
"text": "Conclusions\nshow-links.py transforms flat todo.txt into an interactive dependency graph with fast navigation and powerful filtering. It’s a great example of how you can take a simple text format and add rich visualization without abandoning its advantages (plain text, version control, grep-friendly).\nKey features: - Tree visualization — task relationships are immediately visible - Two display modes — Tree for structure, Day for planning - Powerful filtering — by status, context, tags, text search - Editor integration — one keystroke to edit task or note - Color coding — priorities and statuses are instantly distinguishable - Fast navigation — all keyboard-driven, no mouse needed\nInteresting patterns demonstrated: - Building a tree from a flat list via stack - Tree flattening for scrollable display - Low-level key reading via termios - Rendering accounting for ANSI escape codes - Cycle protection in dependency graphs\nThis approach can be applied to any data with tree structure—git commits, directory structures, mind maps, dependency graphs."
},
{
"objectID": "portfolio/todo-tree-viewer/todo-tree-viewer.html#full-source-code",
"href": "portfolio/todo-tree-viewer/todo-tree-viewer.html#full-source-code",
"title": "Interactive Todo.txt Tree Viewer",
"section": "Full Source Code",
"text": "Full Source Code\nThe complete source code (1235 lines) is available for download:\n\n\n\n\n\n\nTip📥 Download\n\n\n\nshow-links.py — Full Python script (1235 lines)\nPlace this file alongside your todo.txt in ~/Documents/todo/ and run:\npython show-links.py"
},
{
"objectID": "portfolio/cheese-sales/cheese-sales.html",
"href": "portfolio/cheese-sales/cheese-sales.html",
"title": "Cheese Sales Analysis",
"section": "",
"text": "This post demonstrates how to create an attractive 3D pie chart in R using the plotrix package. While pie charts are often debated in data visualization circles, they can be highly effective for displaying market share or proportional breakdowns when stylized well."
},
{
"objectID": "portfolio/cheese-sales/cheese-sales.html#data-preparation",
"href": "portfolio/cheese-sales/cheese-sales.html#data-preparation",
"title": "Cheese Sales Analysis",
"section": "Data Preparation",
"text": "Data Preparation\nFirst, we will load the necessary libraries and create a sample dataset of cheese sales. We use dplyr to elegantly calculate the percentage share (market share) of each cheese type relative to total sales.\nlibrary(plotrix)\nlibrary(dplyr)\n\n# Constructing a sample dataset of cheese sales\ndf <- tibble(\n NM = c(\"Cheddar\", \"Mozzarella\", \"Brie\", \"Parmesan\", \"Gouda\"),\n SUM = c(279158, 231399, 606614, 586469, 267434)\n) %>% \n # Calculating the percentage share for each category\n mutate(\n PROC = round((SUM / sum(SUM)) * 100, 1) %>% paste0('%')\n )"
},
{
"objectID": "portfolio/cheese-sales/cheese-sales.html#creating-the-3d-pie-chart",
"href": "portfolio/cheese-sales/cheese-sales.html#creating-the-3d-pie-chart",
"title": "Cheese Sales Analysis",
"section": "Creating the 3D Pie Chart",
"text": "Creating the 3D Pie Chart\nThe plotrix library provides the pie3D function, which allows for extensive customization. In this example, we manipulate the margins, apply a built-in heat color palette, and format the labels to include both the proportional share and the absolute sales numbers (in thousands).\n# Generating a stylized 3D pie chart\npie3D(\n df$SUM, \n mar = rep(1, 4), # Adjusted margins to fit labels\n col = hcl.colors(length(df$NM), \"Heat 2\"), # Applying a warm color palette\n labels = paste0(df$NM, '\\n', round(df$SUM / 1000), 'k (', df$PROC, ')'),\n main = \"Cheese Sales Distribution\",\n height = 0.1, # Thickness of the 3D pie\n radius = 0.8, # Overall radius\n labelcex = 1, # Label size scaling factor\n explode = 0.1 # Spacing between the pie slices\n)"
},
{
"objectID": "portfolio/cheese-sales/cheese-sales.html#result",
"href": "portfolio/cheese-sales/cheese-sales.html#result",
"title": "Cheese Sales Analysis",
"section": "Result",
"text": "Result\nThe resulting visualization cleanly separates the categories and provides an immediate understanding of the sales distribution among the different cheese varieties."
},
{
"objectID": "portfolio/ocr-tool/ocr-tool.html#overview",
"href": "portfolio/ocr-tool/ocr-tool.html#overview",
"title": "OCR Reader",
"section": "Overview",
"text": "Overview\nThis post demonstrates an Optical Character Recognition (OCR) script written in Python. It leverages EasyOCR, a powerful library that uses deep learning to extract text from images. By integrating clipboard operations using xclip in a Linux Wayland environment, this script allows you to rapidly capture a screenshot (e.g., using a keyboard shortcut), extract the embedded text, and immediately paste the result wherever needed.\n\n\n\n\n\n\nNote\n\n\n\nNote: This demonstration relies on Wayland and xclip on Linux. If you use macOS or Windows, you will need to replace the clipboard commands with tools native to your OS (like pbpaste/pbcopy or standard Python clipboard libraries)."
},
{
"objectID": "portfolio/ocr-tool/ocr-tool.html#dependencies",
"href": "portfolio/ocr-tool/ocr-tool.html#dependencies",
"title": "OCR Reader",
"section": "Dependencies",
"text": "Dependencies\nThe script relies on several key packages:\n\neasyocr to interpret the text from the image.\nPIL.Image and io.BytesIO to construct the image object from raw clipboard bytes.\nsubprocess to trigger system commands like xclip.\nnumpy because EasyOCR expects image data formatted as a NumPy array.\n\nimport easyocr\nfrom PIL import Image\nimport io\nimport subprocess\nimport numpy as np\n\n# Initialize the EasyOCR reader. \n# Here, we configure it to detect Norwegian ('no') and English ('en') text.\nreader = easyocr.Reader(['no', 'en'])"
},
{
"objectID": "portfolio/ocr-tool/ocr-tool.html#system-operations-clipboard-handling",
"href": "portfolio/ocr-tool/ocr-tool.html#system-operations-clipboard-handling",
"title": "OCR Reader",
"section": "System Operations (Clipboard Handling)",
"text": "System Operations (Clipboard Handling)\nInstead of saving images to the disk, the script fetches the image data directly from the system clipboard.\n\nRetrieving an Image from the Clipboard\nWe invoke xclip as a subprocess to pull the image format from the current clipboard selection into a byte stream.\ndef get_image_from_clipboard():\n # Attempt to request the clipboard contents targeted as an image/png\n result = subprocess.run(\n ['xclip', '-selection', 'clipboard', '-t', 'image/png', '-o'], \n stdout=subprocess.PIPE\n ) \n\n # If xclip fails (e.g., if there's text or nothing in the clipboard instead of an image), raise an error\n if result.returncode != 0: \n raise Exception(\"Could not retrieve an image from the clipboard.\") \n\n # Return the raw binary data wrapped in a BytesIO object for PIL to consume\n return io.BytesIO(result.stdout)\n\n\nStoring OCR Text back into the Clipboard\nOnce EasyOCR yields text, we use subprocess again to insert the structured string back into the clipboard, making it ready to be pasted.\ndef copy_text_to_clipboard(text):\n # Pass the recognized text as encoded bytes to xclip's standard input\n subprocess.run(\n ['xclip', '-selection', 'clipboard'], \n input=text.encode(), \n check=True\n )"
},
{
"objectID": "portfolio/ocr-tool/ocr-tool.html#the-main-processing-flow",
"href": "portfolio/ocr-tool/ocr-tool.html#the-main-processing-flow",
"title": "OCR Reader",
"section": "The Main Processing Flow",
"text": "The Main Processing Flow\nThe core logic ties together the clipboard fetch, the reading mechanism, and the clipboard write. You can bind this script to a global system shortcut to trigger it instantaneously after capturing a screen snippet.\ntry:\n # 1. Fetch the image data stream and open it as a PIL Image\n image = Image.open(get_image_from_clipboard()) \n \n # 2. EasyOCR expects NumPy arrays, so we convert the PIL Image accordingly\n result = reader.readtext(np.array(image)) \n \n # 3. EasyOCR returns a list of tuples containing bounding boxes, texts, and confidence scores.\n # We extract just the text strings and join them with newlines.\n recognized_text = \"\\n\".join([text for (_, text, _) in result]) \n \n # 4. Push the final joined text string to the system clipboard\n copy_text_to_clipboard(recognized_text) \n\n print('OCR extraction successful. Text is in your clipboard.')\n\nexcept Exception as e:\n print(f\"Error encountered during OCR: {e}\")"
},
{
"objectID": "portfolio/sales-binol-olive/sales-binol-olive.html",
"href": "portfolio/sales-binol-olive/sales-binol-olive.html",
"title": "Binol vs Olive Sales Analysis",
"section": "",
"text": "Binol and Olive are competing businesses in the flower market. This analysis compares their historical sales across several shared shop locations to identify trends and market dominance in different areas."
},
{
"objectID": "portfolio/sales-binol-olive/sales-binol-olive.html#setup-and-libraries",
"href": "portfolio/sales-binol-olive/sales-binol-olive.html#setup-and-libraries",
"title": "Binol vs Olive Sales Analysis",
"section": "Setup and Libraries",
"text": "Setup and Libraries\nWe use a modern, highly performant Python data stack for this analysis: DuckDB for querying the local SQLite database rapidly, Polars for fast data manipulation, and Seaborn combined with Matplotlib for creating rich visualizations.\n\nfrom sqlalchemy import create_engine\nimport duckdb as db\nimport polars as pl\n\nfrom matplotlib.ticker import FuncFormatter\nimport matplotlib.dates as mdates\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom locale import setlocale, LC_TIME"
},
{
"objectID": "portfolio/sales-binol-olive/sales-binol-olive.html#data-extraction",
"href": "portfolio/sales-binol-olive/sales-binol-olive.html#data-extraction",
"title": "Binol vs Olive Sales Analysis",
"section": "Data Extraction",
"text": "Data Extraction\nWe connect directly to a local database using DuckDB’s powerful integration. We filter for specific competitive shop locations and extract the dataset into a Polars DataFrame. Finally, we convert the date strings into proper Date objects for time-series plotting.\n\n# Connect to the local database\ncon = db.connect(\"~/Documents/kdb/notes/local.db\")\n\n# Query the sales data and convert directly to a Polars DataFrame\ndf = con.execute(\"\"\"\n SELECT * FROM portfolio.salles_binol_olivia\n WHERE Shops IN ('Washington 24', 'Pine 3', 'Cedar 19a', 'Park Avenue 12', 'King Street 25', 'Victoria 38/4')\n\"\"\").pl().with_columns(\n # Convert string dates to datetime objects\n pl.col('DT').str.to_date(format='%Y-%m-%d')\n)"
},
{
"objectID": "portfolio/sales-binol-olive/sales-binol-olive.html#data-aggregation-and-visualization",
"href": "portfolio/sales-binol-olive/sales-binol-olive.html#data-aggregation-and-visualization",
"title": "Binol vs Olive Sales Analysis",
"section": "Data Aggregation and Visualization",
"text": "Data Aggregation and Visualization\nTo visualize the sales side-by-side effectively, we need to transform our data from wide format (Binol and Olive as separate columns) to a long format (or melted) structure.\nUsing Seaborn’s FacetGrid, we can cleanly create small multiples—one plot for each shop—allowing for a direct visual comparison of the performance of the two companies at each location. We also add trend lines and value annotations to make the charts easy to read.\n\n# Set the default Seaborn aesthetic theme\nsns.set_theme()\n\n# Melt the DataFrame so 'Product' becomes a categorical column linking 'Binol' and 'Olive'\ndf_melted = df.melt(\n id_vars=[\"DT\", \"Shops\"], \n value_vars=[\"Binol\", \"Olive\"],\n variable_name=\"Product\", \n value_name=\"Value\"\n)\n\n# Initialize a grid of plots, faceted by 'Shops'\ng = sns.FacetGrid(\n df_melted,\n col='Shops',\n hue='Product', # Color grouping\n col_wrap=3, # Display 3 plots per row\n sharey=False, # Allow y-axis to scale independently for each shop\n sharex=False, # Allow x-axis to scale independently\n height=4 # Height of each individual facet\n)\n\n# Map a lineplot to each facet\ng = g.map(sns.lineplot, 'DT', 'Value', marker='o', markersize=4)\n\n# Apply custom formatting and annotations to each subplot\nfor ax in g.axes.flat:\n # Increase the maximum Y-axis value by 10% to prevent labels from getting cut off at the top\n ylim = ax.get_ylim()\n ax.set_ylim((ylim[0], ylim[1] * 1.1))\n\n # Draw dashed trend lines for each plotted line\n for line in ax.lines:\n sns.regplot(\n x=line.get_xdata(), y=line.get_ydata(), ax=ax,\n scatter=False, # Hide scatter points, show only the trend line\n color='gray', \n ci=None, # Disable confidence interval shading\n line_kws={'linestyle': '--'} \n )\n # Format X-axis tick labels to show abbreviated month names\n ax.xaxis.set_major_formatter(mdates.DateFormatter('%b'))\n\n# Define a function to annotate the data points with their exact values\ndef 
annotate_points(x, y, **kwargs):\n ax = plt.gca() \n for i in range(len(x)):\n ax.annotate(\n f\"{y.values[i]:.1f}\", \n xy=(x.values[i], y.values[i]), \n fontsize=8,\n xytext=(0, 10), textcoords=\"offset points\",\n color=\"black\", \n bbox=dict(boxstyle=\"round\", ec=\"none\", fc=\"gray\", alpha=0.3, pad=0.3),\n va=\"center\", ha=\"center\"\n )\n\n# Map the annotation function to the grid\ng.map(annotate_points, 'DT', 'Value')\n\n# Adjust layout to make room for the main title and legend\ng.fig.subplots_adjust(top=.9) \ng.fig.suptitle('Monthly Sales Comparison by Location') \n\n# Configure the unified legend\ng.add_legend(title=\"Company\")\nsns.move_legend(\n g, \"center\", \n bbox_to_anchor=(.5, 1), \n ncol=5, title=None, frameon=False,\n)\n\n# Set axis labels across all facets\ng.set_axis_labels(\"Date\", \"Sales Amount\")\n\n# Set the title of each subplot based on the shop name\ng.set_titles(\"{col_name}\")\n\n# Render the plot\nplt.show()"
},
{
"objectID": "portfolio/sales-binol-olive/sales-binol-olive.html#conclusion",
"href": "portfolio/sales-binol-olive/sales-binol-olive.html#conclusion",
"title": "Binol vs Olive Sales Analysis",
"section": "Conclusion",
"text": "Conclusion\nBy observing the trend lines, we can immediately identify which storefront location is dominated by Binol and which by Olive over time. Utilizing Polars and DuckDB ensures that even as the dataset scales, this aggregation and querying structure will remain extremely fast and memory-efficient."
},
{
"objectID": "job/it-services-pricing.html",
"href": "job/it-services-pricing.html",
"title": "IT Consultant Vladimir",
"section": "",
"text": "Treg PC? Virus? Ny maskin som skal settes opp?\n Jeg tilbyr pålitelig, lokal IT-hjelp for privatpersoner og småbedrifter i Bergen og Os.\n Alltid fast pris - ingen overraskelser.\n \n \n Ytelse & LinuxSpesialisert på Windows- og Linux-optimalisering\n Sikkerhet & personvernBackup, virusfjerning og personvernhjelp\n Privat & ENK/ASTeknisk støtte og driftsoppsett\n \n\n\n\n\n \n \n \n Om meg\n \n \n \n \n \n \n Jeg heter Vladimir, bor i Bjørnafjorden og tilbyr IT-hjelp til privatpersoner i Bergen og Os\n Jeg har bachelor i informatikk og jobbet som teknisk spesialist med IT-støtte, programvareinstallasjon og systemkonfigurasjon for Windows og Linux. De siste fire månedene hjelper jeg eldre med IT-spørsmål på et lokalt eldresenter — det har gitt meg erfaring med å forklare tekniske ting på en tydelig og tålmodig måte\n Linux har vært mitt daglige system i fire år, og jeg jobber med programmeringsspråkene Python, R og SQL. Jeg setter ærlighet og presisjon høyt — du får alltid en klar vurdering før noe arbeid påbegynnes\n Les mer om meg\n \n\n\n\n\n \n \n Innledende vurdering\n \n\n \n \n Gratis Førstekonsultasjon\n Gratis+ Legg til\n \n \n Usikker på hva du trenger? 
Vi snakker om problemet og finner riktig løsning\n \n \n Hva inngår (avhenger av situasjonen)\n \n Samtale om dine behov og forventninger\n Ved ytelsesproblemer: systemlogg, oppstartstid, S.M.A.R.T.\n Ved maskinvarebytte: informasjon om prosessen\n Ved nyoppsett: kartlegging av programbehov\n Anbefaling av riktig tjeneste og estimert pris\n \n \n \n Praktisk info\n \n Tidsbruk: 10–30 min\n Du bestemmer selv om du vil gå videre\n Kan gjøres via fjerntilkobling eller på stedet\n Besøk hos deg faktureres: 350,-\n \n \n \n \n \n\n\n\n\n \n \n Maskinvare & Nettverk\n \n\n \n \n Total Servicepakke Beste verdi\n 1 600,-1 250,-−22%+ Legg til\n \n \n Kombinerer full fysisk rengjøring med programvareoptimalisering — alt i ett besøk til en samlet pris\n \n \n Fysisk vedlikehold — 1100,-\n \n Åpner kabinettet og fjerner all støvansamling\n Renser CPU-kjøler, GPU-kjøler og alle vifter\n Fjerner gammel kjølepasta fra CPU og GPU\n Påfører ny kjølepasta på CPU og GPU Arctic MX-7\n Kontroll av GPU-temperatur før og etter\n \n \n \n Programvare — 500,-\n \n Fjerning av bloatware og unødvendige programmer\n Optimalisering av oppstartssekvensen\n Rensing av midlertidige filer\n \n \n \n Inngår ikke\n \n Maskinvarebytte inngår ikke\n Reinstallasjon av Windows inngår ikke\n \n \n \n Praktisk info\n \n Tidsbruk: 2 – 3 timer\n Dine filer og innstillinger beholdes\n \n \n \n \n \n\n \n\n \n \n Bytte av PC-deler\n 700,-+ Legg til\n \n \n Montering av deler du selv har kjøpt. Jeg installerer, konfigurerer og tester — du sørger for riktige komponenter\n \n Hva inngår\n \n Grafikkort (GPU)\n RAM-minnepinner\n Prosessor (CPU) og kjøler\n Kabinettsvifter og strømforsyning (PSU)\n \n \n \n Praktisk info\n \n Tidsbruk: 1 – 2 timer\n Komponentvalg og bestilling inngår ikke\n \n \n \n \n\n \n \n Kloning av harddisk / SSD\n 850,-+ Legg til\n \n \n Kopiering av alt til ny disk. 
Ingen reinstallasjon nødvendig\n \n Hva inngår\n \n Sektor-for-sektor kopi av eksisterende disk\n Montering av ny disk i maskinen\n Verifisering av at systemet starter korrekt\n \n \n \n Praktisk info\n \n Tidsbruk: 1 – 2 timer\n Ny disk / SSD inngår ikke\n \n \n \n \n\n \n \n Bytte av kjølepasta CPU\n 450,-+ Legg til\n \n \n Ny termopasta på prosessoren gir raskere varmeavledning, lavere temperaturer og stabilere drift\n \n Hva inngår\n \n Demontering av CPU-kjøler\n Fjerning av gammel pasta og påføring av ny Arctic MX-7\n Temperaturmåling før og etter\n \n \n \n Praktisk info\n \n Tidsbruk: 30 – 60 min\n \n \n \n \n\n \n \n Bytte av kjølepasta GPU\n 650,-+ Legg til\n \n \n Ny termopasta på grafikkortet reduserer temperaturen under belastning, noe som gir bedre ytelse og lengre levetid\n \n Hva inngår\n \n Demontering og åpning av GPU-kjøler\n Ny pasta på GPU-chip Arctic MX-7\n Kontroll av kjølepads (bytte anbefales — tillegg)\n Temperaturmåling under last\n \n \n \n Praktisk info\n \n Tidsbruk: 1 – 2 timer\n \n \n \n \n\n \n \n Nettverk & Wi-Fi\n 550,-+ Legg til\n \n \n Oppsett av ruter, nettverk og tilkobling av alle enheter med stabilt signal\n \n Hva inngår\n \n Tilkobling og oppsett av ruter\n Navngivning og sterkt passord\n Tilkobling av alle enheter\n \n \n \n Praktisk info\n \n Tidsbruk: 30 – 60 min\n VPN / mesh-nettverk inngår ikke — avtales separat\n \n \n \n \n\n \n\n\n\n\n \n \n Programvare & Sikkerhet\n \n \n\n \n \n Fjerning av virus / malware\n 750,-+ Legg til\n \n \n Finner og fjerner all skadelig programvare. 
Inkluderer etterkontroll og råd om fremtidig beskyttelse\n \n Hva inngår\n \n Scanning med flere antivirusverktøy\n Manuell gjennomgang av oppstartsprogrammer\n Fjerning av adware, spyware og trojanske hester\n Rensing av nettleserutvidelser\n Råd om gratis antivirusløsning fremover\n \n \n \n Praktisk info\n \n Tidsbruk: 1 – 2 timer\n Garanti: 14 dager\n Ransomware-gjenoppretting inngår ikke\n \n \n \n \n\n \n \n Ytelsesoptimalisering\n 1 400,-+ Legg til\n \n \n Grundig systemtrimming — maskinen konfigureres for maksimal ytelse. Du merker forskjellen umiddelbart\n \n \n Hva inngår\n \n Fjerning av alle unødvendige programmer\n Optimalisering av Windows-tjenester og oppstart\n Oppdatering av alle drivere\n Defragmentering (HDD) eller TRIM (SSD)\n \n \n \n Praktisk info\n \n Tidsbruk: 2 – 3 timer\n Dine filer og programmer beholdes\n Garanti: 30 dager\n Reinstallasjon av operativsystemet er ikke inkludert\n \n \n \n \n \n\n \n \n Installasjon av programmer\n 450,-+ Legg til\n \n \n Installasjon og konfigurasjon av vanlig bruker- og kontorprogramvare på Windows eller Linux. Gjelder ikke serveroppsett, databaser eller bedriftssystemer\n \n Hva inngår\n \n Installasjon og konfigurasjon av ønskede programmer\n Bruker- og kontorprogramvare på Windows eller Linux\n Verifisering av at alt fungerer etter installasjon\n \n \n \n Praktisk info\n \n Kan gjøres via fjerntilkobling\n Serveroppsett, databaser og bedriftssystemer inngår ikke\n \n \n \n \n\n \n\n\n\n\n \n \n Operativsystem & Oppsett\n \n \n\n \n \n Windows Nyinstallasjon\n 750,-+ Legg til\n \n \n Ren Windows uten reklame-apper, full driveroppdatering. 
Windows-lisens aktiveres automatisk — ha Microsoft-kontoen klar\n \n Hva inngår\n \n Fullstendig sletting av eksisterende Windows\n Installasjon av nyeste Windows 10 eller 11 (ren ISO)\n Installasjon av alle nødvendige drivere\n Deaktivering av unødvendige tjenester og telemetri\n \n \n \n Praktisk info\n \n Tidsbruk: 1 – 2 timer\n Backup av filer inngår ikke\n \n \n OBS: All data slettes. Trenger du å bevare filer, se alternativet nedenfor\n \n Med backup 950,-\n Backup av alle filer først, deretter ren installasjon — du mister ingenting. USB-pinne kreves (ikke inkludert, ny pinne +90,-)\n \n \n \n\n \n \n Full Linux-migrasjon\n 1 500,-+ Legg til\n \n \n Bytt ut Windows én gang for alle. Inkluderer distribusjonsvalg, installasjon og 45 min intro-kurs\n \n Hva inngår\n \n Vurdering av riktig distribusjon (Ubuntu, Fedora, Mint o.l.)\n Installasjon og fullt systemoppsett\n Nettleser, kontorprogrammer og mediespillere\n 45 min intro-kurs: terminal og pakkebehandling\n \n \n \n Praktisk info\n \n Tidsbruk: 2 – 3 timer\n \n \n \n \n\n \n \n Nytt liv-pakke (Linux)\n 950,-+ Legg til\n \n \n Puster nytt liv i gamle PC-er som er for trege for Windows. Passer godt til nettsurfing, YouTube, e-post, nyheter og daglig bruk. 
I tillegg til å sette opp skrivebordsmiljøet installerer jeg alle programmene du trenger — tilpasset dine behov\n \n Hva inngår\n \n Lettdistribusjon med gnome-shell eller enkel WM\n Nettleser, YouTube og e-post ferdig satt opp\n Installasjon av programmer tilpasset ditt bruk\n \n \n \n Praktisk info\n \n Tidsbruk: 1,5 – 2,5 timer\n \n \n \n \n\n \n \n Oppsett av ny PC\n 850,-+ Legg til\n \n \n Ny PC satt opp skikkelig fra start — bloatware fjernet, drivere oppdatert, programmer installert\n \n Hva inngår\n \n Fjerning av forhåndsinstallert bloatware\n Oppdatering av Windows og alle drivere\n Installasjon av nettleser, Office-alternativ, antivirus\n Oppsett av brukerkontoer\n \n \n \n Praktisk info\n \n Tidsbruk: 1 – 1,5 timer\n Programlisenser inngår ikke\n \n \n \n \n\n \n \n Mac-oppsett og feilsøking\n 400,-+ Legg til\n \n \n Begrenset erfaring med macOS, men jeg kan hjelpe med grunnleggende oppsett, innstillinger og feilsøking. Ta kontakt først — så vurderer jeg om jeg kan hjelpe\n \n Hva inngår\n \n Oppsett av ny Mac og Apple ID\n Grunnleggende systeminnstillinger\n Tilkobling til nettverk og skriver\n \n \n \n Praktisk info\n \n Tidsbruk: 30 – 60 min\n \n \n Lav pris reflekterer begrenset erfaring — kontakt meg først for å avklare\n \n \n\n \n\n\n\n\n \n \n Mobil\n \n \n\n \n \n iPhone / iPad\n 400,-+ Legg til\n \n \n Hjelp med oppsett, Apple ID, backup og vanlige programvareproblemer på iPhone og iPad\n \n Hva inngår\n \n Oppsett av Apple ID og iCloud\n Backup til iCloud eller PC\n Overføring av data til ny enhet\n Feilsøking i apper og innstillinger\n \n \n \n Praktisk info\n \n Tidsbruk: 30 – 60 min\n Fysisk skjerm- eller batteribytte tilbys ikke\n \n \n \n \n\n \n \n Android\n 400,-+ Legg til\n \n \n Oppsett, Google-konto, backup og vanlig feilsøking på Android-enheter\n \n Hva inngår\n \n Oppsett av Google-konto\n Backup og overføring til ny enhet\n Fjerning av unødvendige apper\n Feilsøking i apper og innstillinger\n \n \n \n Praktisk info\n \n 
Tidsbruk: 30 – 60 min\n Fysisk skjerm- eller batteribytte tilbys ikke\n \n \n \n \n\n \n\n\n\n\n \n \n Datagjenoppretting\n \n \n\n \n \n Dataredning & Backup\n Fra 700,-+ Legg til\n \n \n Gjenoppretting av slettede filer og oppsett av backup-løsning\n \n \n Hva inngår\n \n Filgjenoppretting med spesialisert programvare\n Oppsett av lokal og skybasert backup\n \n \n \n Praktisk info\n \n Tidsbruk: 1 – 3 timer (avhenger av diskstørrelse)\n \n \n \n \n \n\n \n \n Filredning fra skadet PC\n 700,-+ Legg til\n \n \n PC starter ikke men disken er intakt? Jeg kobler ut disken og henter ut filene du trenger\n \n Hva inngår\n \n Demontering og uttrekk av harddisk / SSD\n Kopiering av ønskede filer til ekstern lagring\n 700,- per disk — +450,- per ekstra disk\n \n \n \n Praktisk info\n \n Tidsbruk: 1 – 2 timer\n \n \n \n \n\n \n \n Reset av Windows-passord\n 450,-+ Legg til\n \n \n Låst ute av din egen PC? Jeg hjelper deg inn igjen uten å slette data\n \n Hva inngår\n \n Tilbakestilling av lokal Windows-konto\n Microsoft-konto gjenoppretting\n Ingen data slettes\n \n \n \n Praktisk info\n \n Tidsbruk: 30 – 60 min\n \n \n \n \n\n \n\n\n\n\n \n \n Rådgivning\n \n \n\n \n \n Digital kjøpshjelp\n 500,-+ Legg til\n \n \n 3 beste alternativer med direktelenker tilpasset ditt budsjett. Spar tusenlapper på riktig valg\n \n Hva inngår\n \n Gjennomgang av ditt behov og budsjett\n 3 konkrete produktanbefalinger med lenker\n Skriftlig sammenligning med fordeler og ulemper\n \n \n \n Praktisk info\n \n Tidsbruk: 1 – 1,5 timer\n \n \n \n \n\n \n \n Fjernstyring/Rask hjelp\n 550,-/time+ Legg til\n \n \n Hjelp via AnyDesk når du foretrekker fjerntilkobling. 
Du betaler kun for brukt tid — 275,- per påbegynt halvtime\n \n Hva inngår\n \n Oppsett av e-postkonto\n Feilsøking i programmer\n Skriveroppsett og driverproblemer\n Rask optimalisering eller opprydding\n \n \n \n Praktisk info\n \n 275,- per påbegynt halvtime\n Betaling gjelder den tiden jeg bruker på oppgaven\n Ikke egnet for oppgaver som krever fysisk tilgang til maskinen\n \n \n \n \n\n \n\n\n\n\n \n \n Dokument & Kontor\n \n \n \n Presentasjon & Rapport\n Fra 650,-+ Legg til\n \n \n Strukturering og profesjonell utforming av presentasjoner. PPTX, Google Slides eller PDF\n \n \n Hva inngår\n \n Strukturering av innhold og lysbilder\n Visuell utforming med konsistent og attraktivt design\n Tilpasning til merkevare eller personlig stil\n \n \n \n Prising\n \n Fra 650,- for opptil ~10 slides\n Større prosjekter avtales etter omfang\n Gratis vurdering for oppstart\n \n \n \n \n Teknisk Quarto-rapport— Fra 1 200,- / etter avtale\n R/Python, interaktive grafer og LaTeX. For bachelor-/masteroppgaver og SMB-dokumentasjon\n \n \n \n\n\n\n\n \n \n Betingelser\n \n\n \n\n \n \n Hos deg (Bergen / Os)\n 350,-\n \n \n Gjelder ved fysisk oppmøte hos deg hjemme eller på kontoret i Bergen og Os. Prisen er fast uavhengig av antall tjenester som utføres på samme besøk\n \n Hva inngår\n \n Én fast pris per besøk — uansett antall tjenester\n Dekker reise til og fra deg i Bergen og Os\n \n \n \n \n\n \n \n Student- og pensjonistrabatt\n 15 %\n \n \n Studenter og pensjonister får 15 % rabatt på alle tjenester. Rabatten gjelder ikke på oppmøtekostnaden\n \n Hvem gjelder det?\n \n Studenter med gyldig studentbevis\n Pensjonister (ingen dokumentasjon nødvendig)\n \n \n \n Praktisk info\n \n Gjelder ikke oppmøtekostnad (350,-)\n \n \n \n \n\n \n \n Ingen løsning – ingen betaling\n \n \n \n Hvis jeg ikke klarer å løse problemet ditt, betaler du ingenting. 
Ingen skjulte gebyrer, ingen risiko for deg\n \n Hva gjelder\n \n Gjelder alle tjenester\n Feilsøking og diagnose er alltid gratis uansett\n \n \n \n \n\n \n \n Gratis førstekonsultasjon\n Gratis\n \n \n Usikker på hva du trenger? Vi snakker om problemet og finner riktig løsning\n \n Hva inngår (avhenger av situasjonen)\n \n Samtale om dine behov og forventninger\n Ved ytelsesproblemer: systemlogg, oppstartstid, S.M.A.R.T.\n Ved maskinvarebytte: informasjon om prosessen\n Ved nyoppsett: kartlegging av programbehov\n Anbefaling av riktig tjeneste og estimert pris\n \n \n \n Praktisk info\n \n Tidsbruk: 10–30 min\n Du bestemmer selv om du vil gå videre\n \n \n \n \n\n \n\n\n\n\n\n \n Facebook Messenger\n \n\n\n\n\n\n \n Bestilling\n 0\n\n\n\n\n\n\n \n Din bestilling\n \n \n\n \n \n \n \n Ingen tjenester valgt ennå. Klikk + Legg til på en tjeneste.\n \n \n\n \n Estimert total\n 0,- NOK\n \n\n \n\n \n\n \n Hvordan ønsker du hjelpen? *\n \n \n \n \n \n \n Fjerntilkobling\n Gratis\n \n Via AnyDesk — last ned på anydesk.com før vi starter\n \n \n \n \n \n \n \n Hos deg\n +350,–\n \n Jeg kommer til deg i Bergen eller Os\n \n \n \n \n \n \n \n Hos meg\n Gratis\n \n Du kommer til meg — adresse sendes etter avtale\n \n \n \n \n Din adresse *\n \n \n\n \n\n \n \n Rabatt\n \n \n \n \n Student- / pensjonistrabatt\n 15 % rabatt — gjelder ikke besøksgebyret\n \n \n \n –\n + Legg til\n \n \n \n \n \n\n \n Kontaktinformasjon\n \n Navn *\n \n \n \n E-post *\n \n \n \n Telefon\n \n \n \n Beskriv problemet ditt\n \n \n \n \n\n \n \n Send bestilling\n \n \n\n \n ✅\n Bestilling mottatt!\n Jeg kontakter deg så snart som mulig. Her er din ordrebekreftelse:\n \n Ordrenummer\n -\n \n \n Lukk"
},
{
"objectID": "index.html",
"href": "index.html",
"title": "Welcome to my space!",
"section": "",
"text": "Data Analysis, IT Solutions & Automation. Bridging the gap between data and actionable insights.\n \n View Portfolio\n More About Me\n \n \n\n\n\n\nRecent Work\n\n\nA selection of my latest projects and publications.\n\n\n\n\n\n\n\n\n\n\nInteractive Todo.txt Tree Viewer\n\n\n\n14 Mar 2026\n\n\n\n\n\n\n\n\n\n\n\nBinol vs Olive Sales Analysis\n\n\n\n14 Dec 2024\n\n\n\n\n\n\n\n\n\n\n\nOCR Reader\n\n\n\n06 Dec 2024\n\n\n\n\n\n\n\n\n\n\n\nWeb Scraping Job Vacancies\n\n\n\n20 Jul 2024\n\n\n\n\n\n\n\n\n\n\n\nBeautiful Export to Excel (xlsx)\n\n\n\n10 Jul 2024\n\n\n\n\n\n\n\n\n\n\n\nCheese Sales Analysis\n\n\n\n06 Jul 2024\n\n\n\n\n\n\nNo matching items"
},
{
"objectID": "about-ru.html",
"href": "about-ru.html",
"title": "Владимир Шевченко",
"section": "",
"text": "Аналитик данных и IT-специалист\n\n\n Telegram Email\n\n Скачать CV\n\n\n\nКраткая информация\n\n\n\n Местоположение: Бьёрнафьорден, Норвегия\n\n\n Год рождения: 1997\n\n\n Статус: Открыт для предложений\n\n\n Водительские права: Категория B\n\n\n\n\n\nЯзыки\n\n\n \n Норвежский\n Средний\n \n \n \n \n\n\n \n Английский\n Базовый\n \n \n \n \n\n\n \n Украинский\n Родной\n \n \n \n \n\n\n \n Русский\n Родной\n \n \n \n \n\n\n\n\n\n\nРезюме\n\n\nАналитик данных и IT-специалист с богатым опытом в автоматизации процессов, системах отчётности и анализе рисков.\n\n\nСпециализируюсь на трансформации сложных бизнес-процессов в автоматизированные решения с использованием Python, R и SQL. Мой опыт охватывает экономику, управление рисками в банковской сфере и техническую инфраструктуру, что даёт мне уникальное понимание как бизнес-потребностей, так и технической реализации.\n\n\nЦеню честность, ответственность и точность в работе. Я открыт к обучению, любознателен и готов осваивать новые навыки. 
В свободное время люблю проводить время на природе, с семьей и друзьями.\n\n\n\n\nОпыт работы\n\n\n\n\n\n\n\nЭкономист\n\n\n Мелитопольский молочный завод, Украина\n\n\n Июнь 2023 – Дек 2023\n\n\nIT-специалист с экономическим уклоном, ответственный за консолидацию данных подразделений и построение единой инфраструктуры отчётности.\n\n\n\nКлючевые достижения:\n\n\n\nРазработал комплексную систему расчёта себестоимости продукции на R, проанализировал все подразделения завода, включая закупочные цены, состав продуктов, зарплаты, логистику, коммунальные расходы и амортизацию оборудования\n\n\nПостроил единую систему отчётности с базой данных SQLite, консолидировав фрагментированные данные из нескольких отделов и систем 1C\n\n\nАвтоматизировал визуализацию продаж и аналитику с помощью R, обеспечив динамические отчёты за любой период времени\n\n\nВнедрил историческое отслеживание запасов и продаж в денежном выражении, обеспечив мгновенный доступ к прошлым данным\n\n\nПровёл анализ конкурентов и расположения магазинов, выявил закономерности продаж и предпочтения клиентов\n\n\n\n\nPython R SQL SQLite 1C Визуализация данных\n\n\n\n\n\n\n\n\nСпециалист по управлению рисками\n\n\n Forward Bank LLC, Украина\n\n\n Ноябрь 2020 – Окт 2022\n\n\nОтветственный за анализ рисков, верификацию клиентов и автоматизацию отчётности по 5 банковским продуктам.\n\n\n\nКлючевые достижения:\n\n\n\nАвтоматизировал крупномасштабные ежемесячные отчёты по рискам (продажи, коэффициенты одобрения, дефолты) путём миграции с Excel на R, обрабатывая множество продуктов и клиентских сегментов\n\n\nРазработал автоматизированную систему верификации клиентов с проверкой чёрных списков, баз кредитных историй (БКИ/ПТИ) и анализом доходов/расходов\n\n\nСоздал пользовательские скрипты проверки на основе требований руководства, преобразуя бизнес-правила в автоматизированные проверки кода\n\n\nРазработал и внедрил новые метрики рисков в сотрудничестве с руководителем отдела, успешно интегрировал в рабочий 
процесс\n\n\nПеревёл большинство задач на основе Excel на R + SQL, достигнув полной автоматизации большинства процессов\n\n\n\n\nR SQL Oracle DB Обработка данных Анализ рисков\n\n\n\n\n\n\n\n\nТехнический специалист\n\n\n Band, Украина\n\n\n 2016 – 2018\n\n\nОбеспечивал VPN-инфраструктуру и техническую поддержку для ~10 пользователей, обслуживал виртуальные машины и обеспечивал бесперебойную работу сервиса.\n\n\n\nКлючевые достижения:\n\n\n\nУправлял и обслуживал 10 виртуальных машин (Windows/VirtualBox) с конфигурацией VPN, обеспечивая бесперебойную работу для пользователей\n\n\nПеревёл виртуальные машины с Windows на Linux на Google Cloud Platform, значительно снизив затраты на ресурсы и увеличив количество экземпляров в рамках того же бюджета\n\n\nНастроил облегчённое программное обеспечение (например, браузер Midori) для сред с ограниченной оперативной памятью\n\n\nОбеспечивал удалённую техническую поддержку пользователей на платформах Windows и Linux\n\n\n\n\nLinux Windows VirtualBox Google Cloud Platform VPN\n\n\n\n\n\n\n\nОбразование\n\n\n\nБакалавр компьютерных наук\n\n 2016 – 2020\n\n\n Таврический государственный агротехнологический университет имени Дмитрия Моторного\n\n\nИнформационные технологии, Очная форма\n\n\n\n\n\n\n\n Back to top"
}
]