[{"body":"","link":"https://www.findichgut.net/en/","section":"","tags":null,"title":"Findichgut.net"},{"body":"","link":"https://www.findichgut.net/en/howtos/","section":"howtos","tags":null,"title":"How Tos"},{"body":"Don’t do it – unless you are brave or really require it Vagrant with WSL is a constant mix of Linux and Windows. I personally think there are very good reasons to use Vagrant with WSL, which is why I still do it, but not all features work, and especially not out of the box.\nSetup Please also check the official Vagrant documentation.\nThe official documentation suggests that you should avoid having Vagrant installed on Windows and WSL at the same time, or at least keep the versions identical.\nPlease be aware that the project you are working on must be located in a directory that is accessible from Windows. This is because Vagrant mounts the project directory into the VMs as shared folders, so VirtualBox needs access to it.\nTo run Vagrant on WSL, you will have to install the virtualbox_WSL2 plugin first.\nBy default, Vagrant creates a NAT interface and a port forwarding to it so it can connect to the VMs on localhost. This is not accessible from within WSL. The plugin enables Vagrant to connect to the VMs using the IP of the Windows system instead of localhost.\nvagrant plugin install virtualbox_WSL2 Furthermore, it is required to allow Windows access from WSL and to define the user home path.\nexport VAGRANT_WSL_ENABLE_WINDOWS_ACCESS=\u0026#34;1\u0026#34;\nexport VAGRANT_WSL_WINDOWS_ACCESS_USER_HOME_PATH=\u0026#34;/mnt/c/checkouts/\u0026#34; The user home path is required, for example, to skip permission checks on the SSH keys, but it is not used everywhere. 
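Taken together, the setup steps so far can be sketched as one shell session inside WSL (a sketch only – the checkout path /mnt/c/checkouts/ is an example from above; adjust it to a directory that actually exists on your Windows drive):

```shell
# Install the plugin that lets Vagrant reach the VMs via the Windows host IP
# instead of localhost (which is not reachable from inside WSL).
vagrant plugin install virtualbox_WSL2

# Allow Vagrant inside WSL to use Windows tooling (VirtualBox).
export VAGRANT_WSL_ENABLE_WINDOWS_ACCESS='1'

# Point Vagrant at a user home that is visible from Windows (example path).
export VAGRANT_WSL_WINDOWS_ACCESS_USER_HOME_PATH='/mnt/c/checkouts/'
```

You would typically put the two exports into your shell profile (e.g. ~/.bashrc) so they survive new WSL sessions.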
So another step is required to actually allow Vagrant to set proper permissions on the generated keys, via the mount options in /etc/wsl.conf:\n[automount]\noptions = metadata,umask=22,fmask=11\nenabled = true\nTo apply these changes, you have to close all WSL sessions and run:\nwsl --shutdown For more information about these settings, please refer to the official Microsoft documentation.\nThere you go, the basic features of Vagrant are working.\nBroken Features What I noticed so far:\nDisk resizing It is not possible to resize disks due to path issues. I have submitted a pull request to fix this issue, but I am neither a Vagrant expert nor a Ruby developer, so I am not sure if it will ever be merged.\nCloud-Init When using the Cloud-Init provider, Vagrant will build and mount an ISO image. On WSL, this image is created under /tmp/.iso, and Vagrant then instructs VirtualBox (running on Windows) to mount /tmp/.iso – a Linux path that is not accessible from Windows.\n","link":"https://www.findichgut.net/en/howtos/vagrant/","section":"howtos","tags":null,"title":"Vagrant with VirtualBox in WSL"},{"body":"General Information Protecting your personal data is important to us. This website is self‑hosted and only collects the data necessary for operation and usage. Data is only shared with third parties within the scope of the functions described here or if legally required or necessary to process your request.\nServer Log Files When visiting this website, technical data transmitted by your browser is automatically collected, including:\nIP address (stored in log files, not linked to individuals) Date and time of access Browser type and version Operating system Referrer URL This data is used solely to ensure technical operation and security of the website. No personal identification is possible. Data is automatically deleted after 2 weeks.\nComments via Giscus (GitHub Discussions) We use Giscus to provide a comment function, which integrates GitHub Discussions.\nThe comment function is only loaded after an active click (“two‑click solution”). 
You can deactivate this integration at any time. Only after your consent is given, a connection to giscus.app and GitHub is established, transmitting data such as IP address and browser information. Processing takes place only with your consent, which you provide by clicking to enable the integration. To post a comment, you must have your own GitHub account and be logged in. Comments are stored directly in the Discussions of our GitHub repository and are publicly visible there. Users can delete or edit their own comments at any time. As repository owner, we can moderate comments or threads and remove them upon request. Technical storage is handled by GitHub; please also see the GitHub Privacy Statement. Contact Form with hCaptcha The contact form is protected by hCaptcha, a service of Intuition Machines, Inc. (USA).\nhCaptcha is only loaded after your explicit consent (“two‑click solution”). Consent is not stored and applies only to the current use. Technical data (e.g. IP address, browser and device information, mouse movements) may be processed and transferred to the USA. Purpose is protection against spam and misuse by automated programs (bots). Processing takes place only with your consent, which you provide by clicking to enable the integration. For more information, see the hCaptcha Privacy Policy. When you use the contact form, the data you enter (e.g. name, email address, message) is automatically compiled into an email and stored in our mailbox. This storage is solely for processing your request and only as long as necessary.\nCookies and Tracking This website does not use tracking cookies or analytics tools.\nCookies may only be set by third‑party functions integrated via the two‑click solution.\nThe comment function (Giscus) may set technically necessary cookies to store your choice and provide the function. The contact form with hCaptcha may set technically necessary cookies required for captcha verification and protection against misuse. 
Details on the type and scope of these cookies can be found in the respective providers’ privacy policies.\nContact via Email If you contact us by email, the data you provide will also be stored in our mailbox and used solely to process your request.\nUser Rights You have the right to:\nRequest information about stored data Correct inaccurate data Request deletion of your data, provided no legal retention obligation exists Object to the processing of your data Important:\nYou can delete or edit your own comments in our GitHub repository at any time. For an overview of personal data stored by GitHub (e.g. account information, technical log data), please contact GitHub directly. We ourselves do not store personal data except for technically necessary server logs and the messages you actively send us via the contact form or email. Changes We reserve the right to update this privacy policy if required to comply with legal requirements or technical changes.\n","link":"https://www.findichgut.net/en/privacy_policy/","section":"","tags":null,"title":"Privacy Policy"},{"body":"Responsible for the content of this website:\nThorsten Habich To leave me a message, please use the contact form:\nLoading the contact form will include the hCaptcha service (provider: Intuition Machines Inc., USA). During this process, personal data (e.g., IP address) may be transmitted to hCaptcha to prevent spam. More information can be found in the privacy policy. By clicking \u0026#39;Activate Form\u0026#39;, you consent to this data transfer. Activate Form Name Email Subject Message Submit ","link":"https://www.findichgut.net/en/imprint/","section":"","tags":null,"title":"Imprint"},{"body":"Imagine you're working from home, as everyone does in your company. You're contacting a colleague who is supposed to be tasked with a new project.\nMe Good morning Carl, how is life? Could you please create a new app to generate a CV? Let me send you the specifications... 
Carl Awesome, yeah, I’m fine, hope you’re doing okay too. Just send them over. Me [Send specifications] Carl Oh, nice project. So, here’s what you have to do: 1. Create a file named app.py with the following contents... [ten files later] If you want, I can also create you a version in green.\nMe Wow, that was fast! Awesome, thanks. But when I run the app, I get an error message: ZeroDivisionError: division by zero. Carl Oh, yeah, I already expected that! That’s because of a limitation in mathematics. In the current version of algebra you cannot divide by zero. Me OK, could you please fix this? Carl Sure, here is a TESTED version of the application. Just replace [twelve files later]. If you want, I can now provide you an optimized, fully working, TESTED version with ALL features. Do you want me to do this?\nMe [after copying and pasting everything before the last sentence] I thought you just did? I’m getting a 500 error on all pages. Carl Alright, that’s because of the routing. Just replace \u0026lt;http\u0026gt; with \u0026lt;html\u0026gt;. Me In which file? Can you provide the full files, please? It would be awesome if you could just send me a ZIP with all files. And did you really test it? Carl OK, let’s be honest, I haven’t tested it. But I will send you a ZIP file with a working version now. It’s definitely working this time. Carl [sends an image] Me Did you just send me an image of a ZIP file? Carl Yeah, sorry, I cannot create ZIP files, so I thought I should send you an image! Me I need a break, talk to you tomorrow! The next day: Me Good morning, can we continue with our project? Carl What project? Me ... Me Let me send you everything [send contents of 14 files]. Carl Wow, do you want me to rewrite this? This code is a classic example of something that needs a full rewrite and a bunch of optimizations! Me You wrote the code just yesterday. Carl So you want me to optimize your code? Me Well... just do whatever is needed... 
Carl Here you go, twenty new files are ready for some copy‑and‑paste action. Me sigh I’m getting a 500 error on all pages, and the routes differ: /project-delete, /skill/delete, /certificate/delete/\u0026lt;string:item_id\u0026gt;... Carl You’re right, let’s use a uniform approach for all routes. Here are 21 new files. Me Thanks, I’ll fix some errors myself so we at least have something that isn’t crashing. Carl If you want, I can... Me No thanks! After fixing the errors, you need some data: Me Hi, can you please help me with something? I have the following data that needs to be migrated to JSON files. Charlotte Sure, no problem, here are your JSON files... Me Uh, why are you stopping halfway? Charlotte Do you want me to send you the full version? Me Yes? Charlotte Here is the full version of the files. Me It stops again in the middle. Charlotte Do you want me to send everything now? Me Yes, is there any issue with this? Charlotte I cannot send that much structured data in one turn. I can send you everything in a file. Do you want me to do that? Me Sweet! Let’s go! Charlotte Download: projects.json Me I cannot download the file! Charlotte Yeah, I cannot send files, so I thought I would send you text to show you how a download would look. Me ... Charlotte I can send everything to you in chunks. Me Fine. Charlotte Here we go [first chunk]. Me OK. Charlotte [second chunk]. Me OK. Charlotte ... [fourteenth chunk] Do you want me to continue with the next JSON file? You wouldn’t accept such behaviour from a human being, would you?\nAs I forgot to state clearly to my friend CoPilot that I actually only wanted some spellchecks on the dialogue above, he gave me this answer:\nExactly — nobody would accept a “colleague conversation” like the one you described. What you sketched out is a textbook case of chaotic, unprofessional collaboration:\nUnclear responsibilities: Carl writes code, then claims it’s yours, constantly offers “optimizations,” but nothing ever works. 
Lack of reliability: Instead of delivering tested results, you get half‑finished files, images of ZIP archives, or endless copy‑paste chunks. Communication breakdown: Contradictory statements (“I tested it” vs. “I didn’t test it”), no structure, and no transparency. Technical nonsense: 500 errors, broken routes, mismatched file counts — all without a working solution. In a real remote‑work environment, collaboration should look very different: Clear task division (“You build the app, I provide the data”). Reliable deliverables (a Git repo, a tested ZIP, working code). Honest communication (acknowledge issues, provide structured fixes). Efficiency instead of endless copy‑paste chaos. So what is actually the deal with AI‑generated software? From my point of view, it took far too long to get a working version. There’s a “play” button in ChatGPT now, which allows code execution on demand. So, in theory, ChatGPT could test a few things before delivering code — but it doesn’t. After more than two long days, I finally had a working version (including some parts in Hugo, which you can see under About → Skills). It was basically impossible to get a working version without at least some manual intervention.\nAnother real issue is that it frequently rewrites the entire codebase. Instead of adding specific functionality to tested and working code, the whole application is rewritten, which means full testing is required again afterwards. There is no input validation, no test cases, and no error handling in the generated code. The application might simply crash. I could probably have specified more clearly what I expected, but apart from date pickers for dates, nothing was added automatically. 
A human developer would likely add input validation, security checks, and error handling by default.\nI provided ChatGPT with all the created files, and it gave me a very detailed analysis of every aspect and problem in the application, as well as a summary:\nCategory | Rating | CoPilot\nArchitecture | 6/10 | 6/10\nSecurity | 4/10 | 4/10\nUX | 7/10 | 6/10\nCode Quality | 7/10 | 7/10\nMaintainability | 6/10 | 5/10\nTo be honest, I was quite impressed by the detailed analysis ChatGPT provided. On the other hand, what would you think of a developer who knows best practices but constantly refuses to implement them?\nWe all know that, especially when it comes to interfaces used only by trusted technical staff, error handling and input validation aren’t always implemented in detail. Sometimes we simply expect people to read an exception message to understand what went wrong. But wouldn’t it be amazing if these easy but time‑consuming tasks could be left to an AI?\nThe CV generator was — and still is — a very nice experiment, and I might continue with it at some point. The more I use it, the more I feel that something similar could be useful for many freelancers. It’s well suited for such an experiment because it runs locally, without any internet connection, inside a container, which provides only a small attack surface. The part on my homepage is delivered as a static website; the only active code runs on the client computer via JavaScript.\nThere may be AI models out there that do a much better job than general‑purpose ChatGPT. I still think it’s fine to use ChatGPT (or any other AI) for prototyping. Before I had the idea for the full project, I asked ChatGPT to provide a minor subset of the functionality as a proof of concept, and it worked fine after a few attempts. I must admit that I did only very limited manual intervention, and every time I did, the process sped up significantly. 
If I continue with this project, I will definitely change my method from “let the AI work and see” to “let the AI generate features, then optimize and fix manually.” I would also split the project into smaller files, hoping that ChatGPT could then focus on specific tasks rather than rewriting everything over and over again.\nThat said, please note that the source code is published on GitHub, so this project is open source. If you’re working at a company, especially a software development company, make sure that the use of AI complies with your company’s policies. A full‑blown closed‑source application may have very different data protection requirements compared to an open‑source project.\n","link":"https://www.findichgut.net/en/posts/cv-generator/","section":"posts","tags":null,"title":"CV Generator"},{"body":"","link":"https://www.findichgut.net/en/posts/","section":"posts","tags":null,"title":"Posts"},{"body":"This page is still under construction. Until then, you can view my CV as a PDF document.\nCV EN CV DE FinTech DevOps Engineer (March 2023 - April 2025) Infrastructure as Code (IaC) Architecture and provisioning of Microsoft Azure infrastructure, Datadog, and PagerDuty using Terraform and OpenTofu, applying Infrastructure as Code (IaC) principles, automation, and monitoring for scalable infrastructure management.\nOracle Implementation of Infrastructure as Code (IaC) principles for the standardization and administration of Oracle databases on Microsoft Azure virtual machines (Rocky Linux). Automated installation, patching, and initialization were achieved using Ansible playbooks, ensuring reproducibility, compliance, and automation. Database provisioning included password policies, user accounts, and permissions, integrated into continuous deployment workflows for secure and scalable infrastructure management. 
Migration of preconfigured Oracle Linux database VMs to Rocky Linux (RHEL) VMs Introduction of SchemaSpy for automated database schema analysis and documentation, improving transparency, maintainability, and knowledge transfer. Automated Linux VM Baseline Configuration with Ansible Creation, extension, and refactoring of Ansible playbooks for the baseline configuration of Linux virtual machines (Rocky Linux (RHEL), Oracle Enterprise Linux, Ubuntu). This included automated installation of packages and repositories, deployment of ClamAV, patch management, and user administration (consistent UIDs across systems, group management, distribution of SSH public keys, user deprovisioning, sudo permissions) and SSHd hardening. Additionally, Postfix was configured to use a smarthost for sending cron job emails. These tasks applied Infrastructure as Code (IaC) principles to ensure automation, reproducibility, and standardized infrastructure management. Ansible‑Driven Microservice Deployment and Secrets Management Deployment of in-house developed microservices using Ansible, including the extension of playbooks for automated provisioning of monitoring configurations (logs, metrics, service health) through installation and configuration of the Datadog Agent. As deployments were integral parts of CI/CD pipelines for end-to-end testing, optimization of deployment speed became increasingly relevant with the growing number of services. Playbooks were refactored to enable automated rollout of PKI-based certificates, integration of Azure Key Vault for secrets management, and intensive use of Ansible tags to selectively execute tasks. These activities applied Infrastructure as Code (IaC) principles to ensure automation, scalability, and standardized infrastructure management.\nApache Kafka Within the project, Apache Kafka was introduced by building a proof-of-concept environment to support developers in adopting event streaming. 
Deployment was fully automated using Ansible, including the provisioning of users, permissions, and topics. The entire process, from initial setup to productive usage, was standardized by applying Infrastructure as Code (IaC) principles. For operating Kafka on Kubernetes, Helm charts were implemented to ensure scalable and reproducible deployments. Later, Kafka was upgraded from version 3 to version 4, aligning the platform with current requirements and ensuring long-term stability and automation. Developed and deployed a proof of concept for secure integration of Apache Kafka with Keycloak via OAuth2. Kubernetes / AKS I administered Kubernetes clusters on Azure (AKS) and carried out deployments of both in‑house developed microservices and external third‑party services using Helm. For end‑to‑end testing within CI/CD pipelines, I implemented namespace separation to provide isolated environments and enable parallel deployments without conflicts. I was responsible for upgrading the Traefik Ingress Controller from version 2 to version 3. When issues arose, I performed debugging at both the cluster level and within individual deployments to quickly identify root causes and deliver sustainable solutions. By leveraging autoscaling mechanisms such as the Cluster Autoscaler, I ensured dynamic adjustment of resources in response to varying workloads. Container services Provisioned Docker images for use in CI/CD pipelines, Kubernetes deployments, and production operations in Docker‑native environments. Ensured security by conducting regular scans with Trivy and establishing a nightly automated rebuild of all images. Evaluated identified CVEs and implemented mitigation measures to maintain a robust and trustworthy container infrastructure. Operated container services on Podman and Docker, including Nginx, SonaType Nexus, Squid, and a Gradle Build Cache. Rolled out configurations and initiated container operations via Ansible, ensuring consistent and automated provisioning. 
Performed regular application updates to guarantee security, stability, and up‑to‑date functionality. Python \u0026amp; Automation Developed a Python script to publish Markdown content directly into Confluence, integrating it into the reporting pipeline I built to provide a comprehensive overview of all systems running in Azure. Automated the update of secrets in Azure Key Vaults, enabling password rotation with every deployment to enhance security and compliance. Implemented notifications to Microsoft Teams, for example to alert on failed pipelines, ensuring rapid visibility and response. Built a notification system for available software updates of third‑party applications (e.g., Datadog Agent, Ansible, Kafka), leveraging RSS feeds with filtering functions to distribute only relevant information to stakeholders. Distributed Ledger (Corda / CENM) Operation and administration of Corda 4 Nodes, both with Ansible and containerized using Kubernetes/Helm Rollout, identity management, and updates for nodes and networks, including troubleshooting of complex issues Support in debugging and build processes of CorDapps (Corda Distributed Applications) Establishment of a private network with the Corda Enterprise Network Manager (CENM) after the shutdown of the R3‑operated network Migration of data and identities from the R3 network into the newly established private network Development of a pipeline for test cases and update validation, ensuring reproducible and secure rollouts and maintenance procedures Infrastructure design optimization within the private network, with a focus on stability, security, and compliance Implementation of Proxy Solutions Packaging Dante as an RPM-based SOCKS proxy and deploying it with a dynamic rule set via Ansible. Setting up a proxy solution based on Squid to restrict external access in test environments, aligned with the production setup. 
Atlassian and CI/CD DevOps Engineer (April 2016 - March 2022) Migration from Bitbucket/Bamboo to GitLab PaaS as enterprise‑wide CI/CD platform Setup of a Proof‑of‑Concept environment on AWS, with a strong focus on permission models to meet company security requirements Deployment of local GitLab runners to evaluate pipeline functionalities Integration with Azure AD via SAML for centralized identity management Implementation of pipeline templates for common use cases (e.g. artifact publishing, container builds using Kaniko rootless and Docker) Coordination with the PaaS provider (GitLabHost) for environment setup and troubleshooting Support for project teams with diverse tech stacks (Maven/Gradle \u0026amp; Java, C++ with cross‑compiling, API gateways with special requirements, Python) Collaboration with other migration team members to communicate effectively with project teams Support and debugging of pipeline issues, including coordination of runner configuration adjustments when required Migration of deployment processes and adaptation of deployment strategies to GitLab standards Handling of project migrations with near‑zero downtime for developers, ensuring business continuity Bitbucket / Bamboo Integration of Bamboo into the company’s Atlassian platform Setup of additional instances in different network segments Operation and support of Bamboo agents, including container image updates and troubleshooting Specialized support of Bamboo agents on macOS for iOS application builds Support for build pipeline and deployment issues Operation of multiple instances (E/K/P) and execution of cross‑instance updates Crowd Setup of Crowd as the central authentication and authorization platform for all Atlassian products Operation of multiple instances (E/K/P) and execution of updates Integration with LDAP queries, local users, and Azure AD Jira / Confluence Supported developers in plugin development Provisioned platforms in different network areas with varying user groups 
Operation of multiple instances (E/K/P), execution of updates, and coordination of plugin testing as well as test cases on the base application Troubleshooting of incidents, stability issues, and performance problems Partial support for user issues Setup of a new infrastructure for migration to the Datacenter Edition SonarQube Provisioning and updating of the SonarQube platform Incorporation of SonarQube into CI/CD pipelines to perform automated code analysis and enforce a quality gate. Artifactory Provisioning and updating of JFrog Artifactory Setup of repositories and registries for storing build artifacts and caching public dependencies Puppet / Ansible Rollout and configuration of all applications and associated reverse proxies (Apache) using Puppet Enterprise. Migration of rollout processes to Ansible, including complete redevelopment of playbooks to migrate the entire platform to new infrastructure and enable future deployments via Ansible Other Responsibilities Close collaboration with other infrastructure teams (DBA, Firewall, Linux, WebApp, Network, Load Balancer, AD) Performed direct database debugging of application issues on production instances and updated database contents for migration and testing purposes, including regular support for test environments Consulting on the feasibility of security policies Validation and mitigation of CVEs, particularly Log4Shell Assumed responsibility for complex or non‑standard tasks requiring tailored solutions Training and certification in SAFe – DevOps DNS Management DevOps Engineer (December 2016 - May 2021) DNS Administration (Unbound \u0026amp; Bind) Administration of Unbound and Bind DNS systems, operating internet resolvers and managing zone files Creation and maintenance of zone files, including setup of zone delegations Management of all relevant DNS records: A, AAAA, CAA, CNAME, SPF, TXT Deployment of DNS software, zones, and configurations using Ansible automation Review and implementation of customer 
requests related to DNS configuration and records Consultation for internal teams on DNS strategies and best practices RPM Packaging \u0026amp; CI/CD Integration Packaging and maintenance of RPM packages for DNS and related software components Build and validation of RPMs through a CI/CD pipeline Delivery of resulting artifacts to the Linux team for integration into the central repository SMTP and Email Infrastructure DevOps Engineer (December 2016 - May 2021) SMTP Chain \u0026amp; Mail Infrastructure Administration of the SMTP chain, covering communication between applications, the internet, and internal backend systems Configuration and optimization of Postfix and Postfwd, maintaining routing rules and ensuring reliable mail delivery Consultation with internal teams and customers on application mailing requirements in line with corporate policies Review of submitted specifications for technical accuracy and policy compliance, translated into Postfwd rules Seamless integration of mail infrastructure with internal and external systems to guarantee secure communication Secure Mail Delivery Operation and optimization of the Zertificon Z1 SecureMail Gateway for encrypted and policy-compliant email exchange Review and implementation of requirements for secure mail delivery, including evaluation of certificate authorities and configuration of mandatory TLS Implementation of PGP and S/MIME support, both for client-to-server communication and site-to-site encryption Ensured confidential and compliant email delivery across all systems Anti-Spam \u0026amp; Malware Protection Management and fine-tuning of Expurgate and Postgrey for spam filtering and greylisting Adjustment of rules and configurations to minimize false positives and improve detection accuracy Analysis and troubleshooting of false positives in spam detection Continuous optimization of anti-spam and malware protection to maintain reliable communication Infrastructure Migration \u0026amp; Automation Migration of 
the entire mail infrastructure to new systems in line with server operations team guidelines for SLES12 Server and network planning, design of merge strategies, and complete redesign of Ansible playbooks for application and configuration rollout Rebuild and operation of all components (Postfix, Postfwd, Postgrey, custom scripts) in chroot environments Independent packaging of applications as RPMs via GitLab CI/CD pipelines Established a robust, testable, and policy-compliant mail infrastructure, eliminating issues from the previous operating model OTRS Ticketing System Introduction and implementation of the OTRS ticketing system as a mail-based support solution Manual compilation of various CPAN modules and resolution of dependencies Configuration of OTRS and mail routing, in close collaboration with the Exchange team Setup and configuration of required mailboxes, establishment and maintenance of OTRS queues for structured ticket distribution Migration of OTRS from Oracle DB to PostgreSQL (OFork) to ensure long-term support and stability Open Source Contributions Contributions to the Postfix project, including bug reporting, discussions, and minor fixes Collaboration with maintainers Wietse Venema and Victor Duchovni Several contributions documented in the official Postfix changelogs Infrastructure \u0026amp; Automation Infrastructure \u0026amp; Operations Engineer (March 2016 - January 2017) Office and Client Management Responsible for the entire office network and all clients, including workstation setup, system rollouts, and software license management Administration of Office 365 in a hybrid environment with Active Directory and Azure AD Connect: users were created in the local AD and automatically synchronized to the cloud, providing Office licenses and mailboxes Introduction of OCS Inventory as an asset management solution to ensure transparent and efficient tracking of hardware and software Direct communication with customers and end users regarding 
technical issues, planned updates, and maintenance activities to ensure smooth operations Collaboration \u0026amp; Atlassian Platform Operation and administration of the Atlassian platform with Jira and Confluence Creation of new projects, workflow customization, and support for teams in optimizing platform usage Network \u0026amp; VPN Design and operation of VPN connections between different offices and data center systems to ensure secure and stable site connectivity Planning and implementation of routing and network segmentation using defined IP address ranges, achieving clear separation of systems and services Introduction of OpenVPN for reliable home office connectivity, replacing insecure workarounds such as permanently running PCs with TeamViewer Design and implementation of firewalls based on iptables, including granular rule sets to secure infrastructure and minimize attack surfaces Continuous analysis and optimization of the network architecture to balance performance and security Virtualization \u0026amp; Server Operations Deployment and management of Windows and Linux VMs based on Hyper-V Operation and maintenance of production applications on Windows servers using IIS for hosting .NET applications, as well as configuration of Apache web servers on Debian as reverse proxies, including setup and ongoing administration. 
Operation of internal infrastructure and customer applications on Debian systems Analysis and resolution of performance issues in production environments Migrations \u0026amp; Integration Migration of Redmine to a new environment with partial data migration Transfer of complete email mailboxes from Google Mail to Office 365 Management of IT infrastructure for an additional office, including adjustments to networks and systems Automation \u0026amp; DevOps Introduction of Puppet to automate administrative tasks and standardize configurations Creation and maintenance of build configurations in TeamCity for .NET/C# projects, including definition of build steps, dependencies, and environment variables Provision of consistent build environments to give developers a stable foundation for compilation and testing Introduction of GitLab to support code reviews and foster collaborative development processes Support for developer teams in using TeamCity and GitLab, troubleshooting build issues, and improving development workflows Assistance with the development and integration of customer-specific plugins for Jira and Confluence, including consulting and technical guidance Databases Setup, administration, and management of MSSQL databases for production applications, including performance optimization and backup strategies Deployment, operation, and maintenance of PostgreSQL databases with a focus on stability and scalability for internal systems and customer projects Design, deployment, and operation of MySQL databases and clusters, including high availability setups and replication to ensure resilience Ongoing analysis and troubleshooting of performance issues, including query optimization and configuration tuning Close collaboration with development teams to provide stable database environments for applications and customer-specific solutions Net Mobile AG Systems Engineer (September 2015 - February 2016) Development of a Python module for managing Dell iDracs Deployment 
and operation of Java applications Phoenix Contact GmbH \u0026amp; Co KG System Administrator - Applications and Operations (December 2011 - April 2013) Technical Platform Support for ERP Implementation (Baan ERP / BaanLN / Infor LN) Supported the introduction of a new ERP platform (historically Baan ERP, project‑specific BaanLN, now Infor LN). Provisioned and configured test instances to validate functionality and performance. Performed sizing analyses and infrastructure planning to ensure optimal platform dimensioning. Provided project support including troubleshooting and close collaboration with business units. Operation of Java Applications and Web Infrastructure Provisioning and administration of Apache Tomcat application servers for running business‑critical Java applications. TLS termination and reverse proxy configuration on Apache to ensure secure delivery of web applications. Troubleshooting deployment and runtime issues and implementing sustainable fixes to ensure high availability. Migration and Operation of the Digital Asset Management Platform Canto Cumulus Migrated the existing Cumulus platform to a new infrastructure and redesigned the system architecture. Collaborated with the vendor to adapt database replication for proxy‑compatible operation. Provisioned, configured, and operated the platform for centralized digital asset management. Implemented monitoring and troubleshooting to ensure stability, scalability, and high availability. Internet Proxy Responsible for secure operation of the corporate proxy infrastructure using McAfee Webwasher, including traffic analysis, risk management, and controlled release of blocked websites.\nImplementation and Operation of the Multi‑Project Management Tool PlanView with SharePoint Integration Supported the implementation of PlanView as a central multi‑project management system. Planned and provisioned technical resources and infrastructure for platform operation.
Integrated with Microsoft SharePoint for centralized document management and workflow support. Operated and administered the platform to support project portfolio management and resource coordination. Architecture and Pilot Implementation of a Configuration Management Database (CMDB) Designed and implemented a centralized CMDB for infrastructure and application data management. Utilized Puppet for automated configuration and deployment. Integrated Icinga2 for monitoring and alerting. Integrated DNS information to replace manual documentation in MediaWiki. Developed an Oracle APEX interface for centralized management and reporting. Project goal: Ensure transparency, consistency, and operational efficiency through automation and integrated monitoring. Implementation of Siemens Teamcenter (PLM System) Set up and provisioned server infrastructure for a large project team. Implemented technical requirements to support the Teamcenter rollout Automated Installer Environment for SLES (Linux Operations) Designed and implemented an unattended installation environment based on SUSE Linux Enterprise Server (SLES) Utilized AutoYaST with manually maintained XML profiles to standardize and automate system deployments Leveraged PXE boot and DHCP for automated system startup and provisioning Ensured consistent and reproducible Linux installations to support operational stability Contributed to improved efficiency and scalability in Linux operations Telefónica Deutschland GmbH Systems Engineer (January 2008 - November 2011) Administration and Operation of Unix/Linux Servers (Debian, Red Hat, Solaris) Installed and configured systems and services for reliable day‑to‑day operations. Monitored performance and ensured availability through troubleshooting with Munin and Big Brother monitoring tools. Provided on‑call support, frequently resolving unexpected issues on unfamiliar systems under time pressure. 
Applied updates, security hardening, and patch management to maintain stability and safeguard infrastructure. Migration and Modernization of the XMPP‑based Messaging Platform Replaced the existing jabberd implementation (PostgreSQL and Perl scripts) with an ejabberd‑based solution backed by MySQL. Developed automated roster creation using PHP, integrated with the OpenLDAP directory. Improved maintainability and simplified administration through modern technologies and centralized user management. Supported internal communication with a more stable and better integrated messaging platform. Design and Operation of Mail Infrastructures for Internal and External Systems Set up and maintained the internal mail platform based on Dovecot, Postfix, and Horde Webmail. Configured load balancers to distribute traffic and ensure high availability of Dovecot instances. Managed mailstores on NetApp storage systems to guarantee performance and reliability. Operated and administered customer systems with up to 2 million mailboxes, using Postfix, Qmail, Sendmail, Courier, and Dovecot Support in Implementing IDM Systems Defined LDAP attributes and validation rules, collaborated on compliance testing, and identified a critical Dovecot bug that would have compromised the mail platform, ensuring stability and preventing disruption. Development of a Web‑based Self‑Service Interface for LDAP Administration Built an administration interface using PHP and Zend Framework. Implemented LDAP authentication for secure login and centralized user management. Applied form validation to ensure consistent and error‑free input. Enabled management of groups and storage of SSH public keys in the LDAP directory. Introduced self‑service features for group owners, delegating administrative tasks and reducing workload for the central IT team. Migration of Samba Platform to High‑Availability Cluster Architecture Migrated the legacy Samba platform to an active‑passive cluster using CTDB.
Transferred approximately 3 TB of data to the new infrastructure. Adjusted directory structures to enable remote profiles. Integrated the environment as a Windows domain for centralized authentication and access control. Education \u0026amp; Certifications IT Specialist for System Integration (January 2005 - December 2008) Training as IT Specialist for System Integration at IKB Deutsche Industriebank AG Business Administration Basics Seminar (Siemens Professional Education, Paderborn) Database Administration Internet Servers \u0026amp; Web Technologies IT Management \u0026amp; IT Security Network Architecture \u0026amp; TCP/IP System Administration Project Management including MS-Project UNIX System Administration / SOLARIS Voice over IP \u0026amp; Wide Area Networks Implementation of Documentation Platform (BIPo – BetriebsInformationsPortal)\nDesigned and implemented a central documentation platform for IKB Data based on PHP, IIS, and MSSQL. Integrated Microsoft Indexing Server to enable full‑text search across documents and attachments. Connected to the Oracle CMDB database to automatically detect server names in articles and link them to CMDB entries. 
Provided a unified knowledge base improving accessibility and operational efficiency across the organization Fernuniversität Hagen Student (January 2010 - December 2010) Knowledge-Based Systems Internet Security Management of Software Projects ","link":"https://www.findichgut.net/en/cv/","section":"","tags":null,"title":"Curriculum Vitae"},{"body":" Code Badge Issuer Link AZ-104 Microsoft Certified: Azure Administrator Associate Microsoft AZ-305 Microsoft Certified: Azure Solutions Architect Expert Microsoft AZ-500 Microsoft Certified: Azure Security Engineer Associate Microsoft CAPA Certified Argo Project Associate The Linux Foundation CBA Certified Backstage Associate The Linux Foundation CCA Cilium Certified Associate The Linux Foundation CGOA Certified GitOps Associate The Linux Foundation CKA Certified Kubernetes Administrator The Linux Foundation CKAD Certified Kubernetes Application Developer The Linux Foundation CKS Certified Kubernetes Security Specialist The Linux Foundation CNPA Certified Cloud Native Platform Engineering Associate The Linux Foundation Golden Kubestronaut Golden Kubestronaut The Linux Foundation ICA Istio Certified Associate The Linux Foundation KCA Kyverno Certified Associate The Linux Foundation KCNA Kubernetes and Cloud Native Associate The Linux Foundation KCSA Kubernetes and Cloud Native Security Associate The Linux Foundation Kubestronaut Kubestronaut The Linux Foundation LFCS Linux Foundation Certified Systems Administrator The Linux Foundation OTCA OpenTelemetry Certified Associate The Linux Foundation PCA Prometheus Certified Associate The Linux Foundation ","link":"https://www.findichgut.net/en/certifications/","section":"","tags":null,"title":"Certifications"},{"body":"This page is under construction. Please come back soon.\n","link":"https://www.findichgut.net/en/about/","section":"","tags":null,"title":"About"},{"body":"Dear readers, I made it! 
During Cyber Week, I decided to become a Golden Kubestronaut, and finally I made it! It's a proud feeling to achieve something that only a bit more than 300 people have achieved before.\nExam overview\nCode Name Type Passing score\nICA Istio Certified Associate Performance-based 68\nLFCS Linux Foundation Certified Sysadmin Performance-based 67\nPCA Prometheus Certified Associate Multiple-Choice 75\nCGOA Certified GitOps Associate Multiple-Choice 75\nCAPA Certified Argo Project Associate Multiple-Choice 75\nCNPA Cloud Native Platform Engineer Associate Multiple-Choice 75\nCBA Certified Backstage Associate Multiple-Choice 75\nKCA Kyverno Certified Associate Multiple-Choice 75\nOTCA OpenTelemetry Certified Associate Multiple-Choice 75\nCCA Cilium Certified Associate Multiple-Choice 75\nYou can read a lot of details about the proctored exams in my Kubestronaut blog post.\nTo become a Golden Kubestronaut, you must first hold the Kubestronaut title. Once you have it, you don’t need to take any further action to receive your perks.\nGolden Kubestronaut perks As a Golden Kubestronaut, you currently receive a backpack, a beanie, and a 60% discount on each new CNCF certification. In addition, you will receive one free ticket per year to a Kubernetes Community Days event and a 50% discount on KubeCon registration. Your Golden Kubestronaut status is lifelong. However, your Kubestronaut title may expire if you do not re-certify. As long as you maintain your Kubestronaut title, you will also enjoy a ThriveOne subscription.\nBut how did it go? A retrospective diary. February 9th, I already had a short peek into the Istio certification course at KodeKloud when I was preparing for the CKS exam. Now it was time to proceed with the preparations for the ICA exam. I rushed through the course at 2x video speed as usual, did all labs and mock exams, and felt prepared. And yet again, 24 hours waiting for the registration.
For the Microsoft exams, I sometimes booked the exam basically right before I wanted to start, but the Linux Foundation requires 24h for some reason. It was already late when I put away the laptop. So the exam for the 11th it is. Time for another hands-on exam.\nFebruary 10th, I used the waiting time before the ICA exam to start preparing for the LFCS exam. Unlike the LPIC certifications, this is a hands-on exam. So you actually have to solve tasks on a machine. And if you don't want to learn iptables commands by heart, you need to know how to find them on the system. The only allowed resources for this exam are docs and man pages on the system or provided by any package of the Linux distribution. For me the LFCS preparation was quite easy, given the fact that I have been working on Linux machines for 25 years now. But at the same time it was challenging, as I hadn't run a manual useradd command for years. Usually I used Ansible for these kinds of tasks, and I realized how often I actually used Google instead of man pages. Anyway, most of the exam was just a refresher.\nFebruary 11th, I passed the ICA exam. Although I never worked with Istio in a production environment, it was quite easy to handle, thanks to the labs in the course and the documentation. One done, nine to go.\nFebruary 12th and 13th, watching course videos and doing practice exams, nothing out of the ordinary, just relaxed studying. I finished up the sessions with the Killer.sh simulator. There's none for the ICA exam, but at least one version for the LFCS. Even though the KodeKloud courses have mock exams, the Killer.sh simulator is basically your final check. It's usually harder than the actual exam, the types of questions are quite close, and you have to solve tasks in multiple different environments.\nFebruary 14th, day of the LFCS exam. Let's get this done! After I finished the exam, I basically started to proceed with the PCA exam preparations straight away.
As a freelancer, you usually go with the client's needs, and a lot of projects request good knowledge of Prometheus these days. So doing the PCA exam was basically the logical consequence of this.\nI was still watching videos and doing labs when I realized that the CNPE exam would become part of the Golden Kubestronaut program on March 1st. The Certified Cloud Native Platform Engineer is a tough performance-based exam. I wasn't worried about this in general, but there was no course on KodeKloud and hardly any information about it so far. So I decided to finish up the path before March 1st and to use the free ThriveOne subscription to prepare for the CNPE exam, which was already part of my purchased Kubestronaut to Golden Kubestronaut bundle.\nFebruary 15th, PASSED the LFCS with a straight 100%. PCA exam booked. Starting to prepare for the CGOA (Certified GitOps Associate) exam. 8 left.\nFebruary 16th, while I was waiting for the PCA exam, I was still working on finalizing my CGOA preparations. The CGOA exam is a multiple-choice exam and the KodeKloud course is pretty much focused on Argo, with a lot of labs. With some experience in DevOps and a good understanding of what GitOps means, it's not too hard. Once I had finished the CGOA preparations, I started with the CAPA course. As mentioned, the CGOA course was pretty much focused on Argo anyway, so I was confident that this would be a good choice. After I finished the PCA exam, I proceeded and booked the CGOA exam. 30% golden.\nFebruary 17th, CGOA exam day. As for all exams, I had passed the course's mock exams and used the exam simulators provided by Hidde van Griensven in his GitHub repository to gain some more confidence. As the exam simulator only provides basic questions without source code or images, I usually also did the practice exams in the CNCF Exams repository. Only if all results pointed to a good score did I book the exam, as I was really eager to push this through.
Although I was quite tired after several extra-long days of preparation, I was already working on the CBA exam. The concepts of Backstage are quite nice, but... it's based on React and NodeJS. I didn't have many issues with the concepts, the YAML, or the setup, but I already had a strange feeling. CGOA passed, CAPA booked. Let's see what Backstage has to offer... 4/10\nFebruary 18th, day of the CAPA exam. I was quite well prepared for the CAPA exam, and was almost done with the CBA preparations when something happened. The results of the KodeKloud mock exam were not fantastic, but OK. The Hidde van Griensven simulator: not too good, but still OK. I started with the CNCF exam simulator. Everything went fine in the beginning, but the questions there are sorted by category, and the customization part became quite rough for me as I am not a NodeJS or React developer. I had to make a decision: no Backstage exam on the next day. I took my time and checked some videos again, did some labs again, and then started to focus on the CNPA exam. Halfway through.\nFebruary 19th, as the CBA courses already included parts of the CNPA exam and I somehow thought it would be easier to have a break from specific products, I finished my CNPA exam preparations and booked the exam. Then I spent time trying to better understand tools like yarn and the Backstage CLI, and how to implement plugins in Backstage. I was even close to taking a React course on Udemy. But at some point I switched over to start with the KCA preparations. By the end of the day, I told my wife that I would have to try the CBA exam, even if it might only show me my gaps.\nFebruary 20th, I started the day discussing React, MaterialUI, and Backstage with AIs, and did additional practice exams generated by them. Then I switched over to Kyverno, as the KCA exam was still outstanding.
By the end of the day, I had finished the Kyverno preparations and felt \u0026quot;prepared\u0026quot; for the Backstage certification.\nAs usual, I stopped all activity one hour before the exam (CNPA) to clear my head. During the exam, I realized how tired I actually was. It took much longer than usual, and it wasn’t just because there were more questions. But I had a mission.\nNormally, I finish the exam, then go back to flagged questions and check all answers again. This time, I was too tired, but confident I had enough points. I only reviewed the flagged questions and submitted. More than sufficient.\nThat day, I also worried about running out of slots for the last exams, so I booked them for February 28th to secure a time. I knew I could reschedule if needed. 6/10\nFebruary 21st, day to defeat my nemesis. Well, it's actually not really like that. I like Backstage and the possibilities it offers. It was just the development part of this exam and the lack of time to actually learn the basics of React. So I had more discussions with AIs, got some cheat sheets for commands, and then I put everything away and finished my Kyverno preparations. The CBA exam itself? Even worse than expected. I flagged one question after the other. Every time I felt insecure, I gave an answer, flagged the question and moved on. Even when I knew the concept, the questions were sometimes irritating. I tried to stay calm and do my best. Interestingly, when I went back to the first flagged question, the answer was clear to me. Same with the next one. From the 18 or so questions I had flagged, most answers seemed correct to me. Don't ask me how I did it. I guess a mixture of good preparation and educated guessing, but I passed and not just somehow, it was even a good result.\nI had put the OpenTelemetry and Cilium preparations to the end of my journey, as the learning path on KodeKloud had shown that those courses have much longer preparation videos than all the other courses. 
So it was still a long way to go, but one exam less on the list. 70% golden!\nFebruary 22nd, besides the KCA exam, I was working on my OpenTelemetry preparations. It turned out that OpenTelemetry involved much more theory than I had hoped. By the end of the day I was done with the OTCA preparations and already did some preparations for the CCA, which turned out to have only half as many training videos as stated in the learning path overview. I was almost done! Unfortunately I couldn't get a good time slot for the 23rd to do the OTCA exam. Only two more to go.\nFebruary 23rd, as mentioned earlier, I usually did several mock exams before booking an exam. Well… on the 25th of February my wife and I had our anniversary. Our 10th wedding anniversary. I didn't really want to spend time thinking about exams on that day. So I made a decision. I was quite confident that I would be able to finish my exam preparations for the CCA. So I rescheduled the exams. OTCA was scheduled for 3:30 pm, which basically means that you can check in at 3 pm and, if you're a bit lucky, can start around 3:15 pm. Furthermore, I tried to get a slot for the CCA exam the same day. I frequently checked the schedule, but by the end of the day I could only find one for 8:30 pm, which is not my preferred time to take an exam, but I was happy with the decision. Indeed I managed to get through the rest of the CCA preparations that day, so I was close to the finish line.\nFebruary 24th, I started my day going through some OpenTelemetry topics, refreshing my memory. I mean hey, I basically had four different exams on my mind in between and I really wanted to avoid messing up this exam, as it would also have a psychological effect on the CCA later on. My wife was reading the news. It was the fourth anniversary of Russia’s full-scale attack on Ukraine, which had started on February 24th, 2022. Back then we stood in front of our door watching rockets and trying to realize what was actually going on.
I, the one who is usually reading the news, stayed out of it. Not today! No bad juju, no bad karma, no bad vibes. I tried to do as much as needed and as little as possible on that day. At 4:36 pm I received my results… only one exam left. Time for a break. Time to check my personal status again. Time for the exam. At 9:35 pm it was official: I had passed the last of the 10 exams.\nWith an average score of 92, 13 days after the first exam, I was done. 15 days in total. Most of them far too long. I was in the mood to celebrate the achievement, while my body was in the mood to sleep. After my mind won the rest of the day, it was time for my body to win. So I went to bed.\nWhat made this really interesting is that you can learn about a lot of different technologies and how things could actually work. As of now I have never worked on a project that really followed the GitOps approach all the way to production. The way to the Golden Kubestronaut is not only a way to deepen your knowledge in certain areas, but it can also take your mind on a journey to realize how IT can look in 2026. Even though it was really challenging, I would most definitely do it again and I am already looking forward to the CNPE exam!\nBeing 43 years old now, I can say that you're never too old to take on a challenge. Maybe age even helps sometimes.\nAll you need is a strong will, persistence, and curiosity… and yes, being a bit of a smarty helps too. ;)\n","link":"https://www.findichgut.net/en/posts/lets_make_it_golden/","section":"posts","tags":null,"title":"Let's make it golden - My Golden Kubestronaut challenge"},{"body":"Already at the end of October, I decided to take the CKA exam to underline my Kubernetes knowledge. When I started preparing for it, I received a phone call telling me that a potential customer might be interested in Azure skills.
Because of this, I postponed my CKA plans and decided to take the Azure exams first, which helped me refresh some knowledge and document my expertise. After I was done with my Azure Certs, I took a short break and then started with my CKA preparations. It didn’t take long until I realized that I wanted to become a Kubestronaut.\nThe Kubestronaut title consists of five certifications:\nCode Name\nCKA Certified Kubernetes Administrator\nCKAD Certified Kubernetes Application Developer\nCKS Certified Kubernetes Security Specialist\nKCNA Kubernetes and Cloud Native Associate\nKCSA Kubernetes and Cloud Native Security Associate\nThe first three are performance‑based exams, which means you have to solve tasks on actual Kubernetes clusters. The KCNA and KCSA exams are classic multiple‑choice exams.\nThe CNCF publishes the curriculum for all exams on GitHub.\nPerformance-based exams For the performance‑based exams, there is a list of documentation you are allowed to use. This usually includes all man pages and documentation available with the distribution (you can also install software on the system), as well as the Kubernetes documentation. You can see the allowed resources for all exams here.\nIn several tasks, there are links to specific documentation pages, which help you proceed quickly. They also provide a link to the allowed resources page.\nFor all performance‑based exams, it takes up to 24 hours after you finish the exam to receive the results. They will tell you the same for the multiple‑choice exams, but in general you get the results right after you finish.\nWhen I say you will receive the results within 24 hours, it basically means you can hope for 22 hours but should expect them to use every minute they have. Some reports even state that it took longer than 24 hours.
I personally had all results within the time frame.\nDuring the exam When you enter the online exam, a proctor will say hello, remind you it’s your last chance to go to the toilet, ask if you are ready, might check your passport again, and then proceed with the environment check. Even if it’s your fifth exam, don’t try to provide all answers before they ask – they follow a strict procedure with the same text for all exams.\nyou might be allowed to go to the toilet; this decision is taken by the proctor, so you can ask politely and request a break you may NOT have other people or ANIMALS in the same room Please be aware that the rules are strict in order to maintain the reputation of the certification and to ensure an exam experience without interruptions. This is something every exam taker should be interested in, too.\nA full list of the rules can be found here.\nThe exams The CKA exam checks skills across most parts of Kubernetes. For me, the CKA exam was the most difficult one. As a non‑native speaker with decent English, it was still hard to understand all questions in detail. Furthermore, it was my first hands‑on exam. Luckily, I found this Udemy course, which has a lot of labs (tasks you solve in a test environment). You get free access to the lab environments with the course. This course actually introduced me to KodeKloud. KodeKloud is a platform you will hear about in several of my certification‑related blog posts. It offers high‑quality courses, often with many quizzes or labs. Even for certifications with multiple‑choice exams, they usually include labs to improve understanding. Listening to the courses at 2x speed, the trainers seem quite enthusiastic (only works at 2x speed for me).\nI personally believe that the CKA is the best exam to prove Kubernetes knowledge. Even if most people say that the CKS is harder, the CKA gives a firm overview of all areas of Kubernetes and is most useful for anyone who wants to work with a Kubernetes cluster. 
I almost ran out of time during the exam. I can only recommend using the “Quick Reference” box in the task description for the fastest access to the documentation and using:\nkubectl create --dry-run=client -o yaml\nas well as:\nkubectl edit ...\nkubectl replace --force -f /tmp/....yaml\nEven if the resource cannot be edited in the required way, it will provide a file in /tmp/ which you can use to replace the resource in question.\nFor the CKAD exam, I had already switched to a KodeKloud subscription. There is an entire Kubestronaut learning path, which includes the previously mentioned CKA course on the KodeKloud platform itself, as well as courses for all the other Kubestronaut exams. After the intense preparation for the CKA exam, this one was really easy. I think it took me less than an hour to finish all tasks, and then I checked everything in detail for another 30 minutes.\nFor the CKS exam, I was warned that it would be the hardest of the three. I personally don’t think so. The courses were a good match, and it was quite manageable. As in all exams, I sometimes wasn’t confident that my solution matched their expected results, but I solved all tasks within about an hour plus another 30 minutes for checking.\nThe previous exams prepare you really well for KCNA and KCSA. You just need a bit more theory on topics like Service Mesh, CNCF, Observability, and GitOps. I also played around with Copilot and ChatGPT to create some mock exams – just like I described in my Azure post.\nExam tip Please make sure, for each task, that you perform it on the correct system. In the real exam, you will have a separate cluster for each task. In my CKA exam, I accidentally tried to solve a task on the wrong environment, which wasn’t sized for the large task I attempted, and I lost points due to an unresponsive system.\nAnd most importantly: if you get stuck on a task, flag it and proceed! You can see all flagged tasks in a drop‑down menu and return to them later.
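To illustrate the kubectl create --dry-run=client -o yaml tip above: a command such as kubectl create deployment web --image=nginx --dry-run=client -o yaml creates nothing on the cluster, but prints a skeleton manifest roughly like the following (a sketch only; "web" and "nginx" are example names, and the exact defaults can vary between kubectl versions):

```yaml
# Approximate output of:
#   kubectl create deployment web --image=nginx --dry-run=client -o yaml
# (example names; nothing is created on the cluster)
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
```

Redirect this to a file, adjust the fields the task asks for, and create the resource with kubectl apply -f – much faster under time pressure than writing the manifest from scratch.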
Later tasks might be more time‑consuming or more complex. If you notice that you took the wrong turn and are not sure how to resolve the issue, move on and check again once you’re done with the rest. Otherwise, you might run out of time, which makes it harder or even impossible to solve the remaining tasks.\nDiscounts Several course providers offer discount codes for the exams. Furthermore, there are huge discounts during Cyber Monday and other occasions. If you already took the CKA (or CKAD), there are upgrade bundles at a lower price than taking the single exams. If you are close to the Cyber Monday sale (end of November, beginning of December), this offers the highest discount rates.\nKiller.sh For each exam, there is a killer.sh simulator. With each exam, you receive two sessions for the simulator for free. The CKAD only has one version, while the CKS and CKA simulators have two different versions available. You will have a timer for 120 minutes, which is exclusively for you to measure your own performance. You will have 36 hours of access to the test environment once a session has started. After the two hours, or if you stop the exam, you will also receive access to a very detailed solution and explanation of each task, while still being able to compare your results in the environment. During the 36 hours, you can also reset the environment and start over. Read the solutions carefully, as they contain valuable information on how to quickly solve tasks in a reliable way.\nHow long I prepared If you take several courses with the same training provider: many providers reuse the same videos across different courses. Personally, I usually avoided skipping those videos – simply to refresh the knowledge and keep it present. But if you want to move faster, you can of course skip them.\nIt took me a bit more than two weeks to prepare for all of the exams. KodeKloud states that it takes 4–5 months to prepare if you spend 6 hours per day. Others say it takes 40 days. 
I personally think it mainly depends on how much you want to achieve and what other duties you have during the day.\nWhat happens next Once you have completed the final exam, you will receive an e‑mail with further instructions within a week. You will be asked to fill out a form with your personal data. Usually within another week, your name will appear on the Kubestronaut page.\nAs of today, you will receive the following perks for becoming a Kubestronaut:\na Kubestronaut jacket (typically delivered within 4 months)\naccess to a dedicated Kubestronaut Slack channel and mailing list\nfive coupons for 50% off certifications each year\n20% off three CNCF events per year\nAnd of course:\n","link":"https://www.findichgut.net/en/posts/kubestronaut/","section":"posts","tags":null,"title":"Kubestronaut"},{"body":"To be honest, I didn’t have a website in November 2025, but since I am mainly speaking about my experiences in November, I changed the date.\nThis also gives me the chance to provide a bit more detail on things I learned later on.\nAt the beginning of November, I decided to pursue some Azure certifications. Initially, I wanted to take AZ‑104 (Administrator). Then I realized I was more curious than that.\nMy next plan was to also take AZ‑305 (Solutions Architect) and AZ‑400 (DevOps). I soon realized that I had mistakenly associated Azure DevOps with a general DevOps certification rather than the product Azure DevOps. Later, I also learned that I am not a big fan of Azure DevOps. Although it has some benefits, it felt to me like just another toolchain without any real “wow effect.” Furthermore, it is designed specifically for Azure, which deepens vendor lock‑in. Last but not least, I quickly realized that the course I took on it was not good at all. So finally, I decided against taking the AZ‑400 certification.\nMicrosoft charges different prices depending on the country. 
While exams cost $165 in the United States and 126 Euros in Germany (around $150), they are only $58 in Ukraine. This makes them highly attractive for documenting and extending your knowledge — at least if you are not in the US or Western Europe ;)\nAnyway, I had my Udemy subscription and was highly motivated, so I took the first course.\nAZ‑104 I prepared with Udemy and followed the entire course, as usual, at 2x speed.\nIn general, John Christopher seems to be the go‑to instructor on Udemy when it comes to Azure courses. I also looked for more practice exams on Udemy and, of course, completed the practice assessments, which are usually linked on the Exam Overview page on learn.microsoft.com for all exams.\nSince I already had a lot of experience with Azure, this was enough to pass the AZ‑104 exam.\nAZ‑305 Again, I chose a course by John Christopher for my preparation on Udemy. Unfortunately, I really didn’t feel that this was sufficient.\nWhen should you use Azure SQL, when a managed instance, or should you deploy a self‑managed VM? Which product do you need for which amount of data, failover time, and so on? I had the strong impression that there are many small details to know, rather than just painting the big picture.\nI checked more practice exams on Udemy, but I wasn’t sure if all the answers were correct. Of course, I also took the official practice assessment from the Exam Overview page provided by Microsoft.\nWith all those small details, I tried another approach: what if you ask an AI to create practice exams for you? You can even ask for more specific questions on certain subjects, such as differentiating between database types. And it worked!\nSpecifically, I asked Copilot to prepare some test exams. After noticing that some questions weren’t as good as I expected, I asked for more questions on specific topics. I also started discussing subjects with Copilot and reading further explanations. 
In some cases, I was certain Copilot was wrong, so I opened a tab with ChatGPT and asked the same question. Sometimes, it turned out that two AIs can have three different opinions on a subject — for example, what is required to migrate secrets from one Key Vault to another.\nIn any case, it also helps to ask them for documentation links and references to get detailed answers. One problem remained: AIs are trained to keep you entertained and happy! So you have to remind them frequently that you need exam‑level questions, that they should not ask easy ones, and if you notice the questions becoming easier again, you have to repeat the instruction.\nIf you want statistics, make sure you receive them regularly and take your own notes. At some point, you get the feeling that discussing with the AI doesn’t get you any further — either you agree, or the AI is wrong, or at least it’s hard to determine if it’s right. That’s the moment when you’re ready to take the exam.\nAZ‑500 During my preparation for AZ‑400 (which was actually the exam I wanted to take after AZ‑104), I decided it would make sense to gain more knowledge about security in Azure. I basically followed the same approach as for AZ‑305.\nFor me, this exam was easier than AZ‑305, as it was less focused on product‑specific details. So again, I used Udemy — fortunately with another course by John Christopher — plus the assessment test on Microsoft Learn, and a few chats with my AI companions.\nFurther recommendations By now, I know that KodeKloud also offers courses for all three (and more) of the mentioned Microsoft certifications. I haven’t tried them yet, but if I ever re‑certify, I would definitely give them a try, as they usually provide very good courses.\nFor those who don’t have a KodeKloud subscription and are also looking for AWS certifications, I can strongly recommend Stephane Maarek. 
For me personally, Udemy has become less interesting, as course quality varies and higher‑level material is not included in the subscription.\n","link":"https://www.findichgut.net/en/posts/azure-certs/","section":"posts","tags":null,"title":"Azure Certifications"},{"body":"","link":"https://www.findichgut.net/en/categories/","section":"categories","tags":null,"title":"Categories"},{"body":"","link":"https://www.findichgut.net/en/series/","section":"series","tags":null,"title":"Series"},{"body":"","link":"https://www.findichgut.net/en/skills/","section":"","tags":null,"title":"Skills"},{"body":"","link":"https://www.findichgut.net/en/tags/","section":"tags","tags":null,"title":"Tags"}]