Anna-Lena Popkes
Principal component analysis (PCA)

After a longer break, I continued working on my machine learning basics repository, which implements fundamental machine learning algorithms in plain Python. This time, I took a detailed look at principal component analysis (PCA). The blog post below contains the same content as the original notebook. You can run the notebook directly in your browser using Binder. 1. What is PCA? In simple terms, principal component analysis (PCA) is a technique for performing dimensionality reduction.
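To give a flavour of what dimensionality reduction means in practice, here is a minimal plain-Python sketch (my own illustration, not code from the notebook; the function name and the restriction to 2-D data are choices made here) that finds the direction of maximum variance via the closed-form eigendecomposition of a 2×2 covariance matrix:

```python
import math

def pca_2d(points):
    """First principal component of 2-D data, via the closed-form
    eigendecomposition of the 2x2 sample covariance matrix."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # Sample covariance matrix entries (divisor n - 1)
    sxx = sum((x - mx) ** 2 for x, _ in points) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in points) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in points) / (n - 1)
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    trace, det = sxx + syy, sxx * syy - sxy ** 2
    lam = trace / 2 + math.sqrt(max(trace ** 2 / 4 - det, 0.0))
    # Corresponding eigenvector = direction of maximum variance
    if abs(sxy) > 1e-12:
        v = (lam - syy, sxy)
    else:  # covariance already diagonal: component is axis-aligned
        v = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(*v)
    return lam, (v[0] / norm, v[1] / norm)
```

Projecting each point onto the returned unit vector reduces the data from two dimensions to one while keeping as much variance as possible.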

Thursday, August 11, 2022
Support vector machines

I posted another notebook in my machine learning basics repository. This time, I took a detailed look at support vector machines. The blog post below contains the same content as the original notebook. You can run the notebook directly in your browser using Binder. 1. What are support vector machines? Support vector machines (SVMs for short) are supervised machine learning models. They are the most prominent members of the class of kernel methods.
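As a small taste of the topic, the sketch below (my own illustration, not code from the notebook) trains a linear SVM by sub-gradient descent on the regularised hinge loss; labels are assumed to be ±1, and the hyperparameter values are arbitrary defaults:

```python
def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Linear SVM via sub-gradient descent on the objective
    (lam/2) * ||w||^2 + mean(max(0, 1 - y_i * (w . x_i + b))).
    X is a list of feature tuples, y a list of labels in {-1, +1}."""
    d = len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # point violates the margin: hinge term is active
                w = [wj - lr * (lam * wj - yi * xj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:           # only the regulariser contributes
                w = [wj - lr * lam * wj for wj in w]
    return w, b
```

On linearly separable data this finds a separating hyperplane; the kernel methods mentioned in the post generalise the idea to non-linear decision boundaries.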

Tuesday, April 13, 2021
Bayesian linear regression

I finally found time to continue working on my machine learning basics repository, which implements fundamental machine learning algorithms in plain Python. In particular, I took a detailed look at Bayesian linear regression. The blog post below contains the same content as the original notebook. You can run the notebook directly in your browser using Binder. 1. What is Bayesian linear regression (BLR)? Bayesian linear regression is the Bayesian interpretation of linear regression.
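To hint at what the Bayesian interpretation buys you, here is a tiny sketch (my own illustration, not code from the notebook) of the closed-form posterior for the simplest possible case: a single slope parameter w in y = w·x + noise, with a Gaussian prior on w and known noise precision:

```python
def blr_posterior_1d(xs, ys, alpha=2.0, beta=25.0):
    """Posterior over the slope w in y = w * x + noise, assuming
    prior w ~ N(0, 1/alpha) and Gaussian noise with precision beta.
    Returns the posterior mean and variance (both in closed form)."""
    # Posterior precision: alpha + beta * sum(x_i^2)
    precision = alpha + beta * sum(x * x for x in xs)
    variance = 1.0 / precision
    # Posterior mean: beta * variance * sum(x_i * y_i)
    mean = beta * variance * sum(x * y for x, y in zip(xs, ys))
    return mean, variance
```

Unlike ordinary least squares, which returns a single point estimate, the posterior variance quantifies how uncertain we still are about the slope after seeing the data.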

Saturday, February 20, 2021
Variational Inference

Introduction Variational inference is an important technique that is widely used in machine learning. For example, it's the basis for variational autoencoders, and Bayesian learning often makes use of variational inference. To understand what variational inference is, how it works, and why it's useful, we will go through each point step by step. What are latent variables? A latent variable is the opposite of an observed variable: it is not directly observed but inferred from other variables that are observed.
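The idea can be illustrated on a toy model with one latent variable (this example is mine, not from the post): z ~ N(0, 1) is latent, x | z ~ N(z, 1) is observed, and we fit an approximating Gaussian q(z) = N(m, s²) by maximising the evidence lower bound (ELBO), here simply by grid search over the variational parameters:

```python
import math

def elbo(m, s, x):
    """ELBO for q(z) = N(m, s^2) under the toy model
    z ~ N(0, 1), x | z ~ N(z, 1), with additive constants dropped."""
    # E_q[log p(x, z)]: expectations of -z^2/2 and -(x - z)^2/2 under q
    expected_log_joint = -0.5 * (m * m + s * s) - 0.5 * ((x - m) ** 2 + s * s)
    # Entropy of the Gaussian q
    entropy = 0.5 * math.log(2 * math.pi * math.e * s * s)
    return expected_log_joint + entropy

def fit_q(x):
    """Maximise the ELBO over a grid of variational parameters (m, s)."""
    grid_m = [i / 100 for i in range(0, 201)]
    grid_s = [i / 100 for i in range(30, 121)]
    return max(((m, s) for m in grid_m for s in grid_s),
               key=lambda p: elbo(p[0], p[1], x))
```

Because this toy model is conjugate, the exact posterior is known to be N(x/2, 1/2), so for an observation x = 2 the fitted q should land near m = 1 and s = √0.5 ≈ 0.707 — a sanity check that maximising the ELBO really does recover the posterior.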

Saturday, February 23, 2019
Kullback-Leibler Divergence

One of the points on my long ‘stuff-you-have-to-look-at’ list is the Kullback-Leibler divergence. I finally took the time to take a detailed look at this topic. Definition The KL divergence is a measure of how similar (or different) two probability distributions are. Given a discrete probability distribution $P$ and another probability distribution $Q$, the KL divergence over a set of points $X$ is defined as: $$D_{KL}(P \,\|\, Q) = \sum_{x \in X} P(x) \log \left( \frac{P(x)}{Q(x)} \right)$$
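The definition above translates almost line for line into plain Python (this sketch is my own illustration, not code from the post; distributions are represented as dicts mapping outcomes to probabilities):

```python
import math

def kl_divergence(p, q):
    """D_KL(P || Q) for discrete distributions given as dicts mapping
    outcomes to probabilities.  Assumes q(x) > 0 wherever p(x) > 0
    (otherwise the divergence is infinite).  Uses the natural log,
    so the result is in nats."""
    return sum(px * math.log(px / q[x]) for x, px in p.items() if px > 0)
```

Note that the KL divergence is not symmetric: in general, kl_divergence(p, q) differs from kl_divergence(q, p), which is why it is a divergence rather than a distance.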

Saturday, February 2, 2019