April 22, 2026

Solution for Advent of Code 2018 - Day 22: Mode Maze

Link to the puzzle text

Part 1

In this puzzle, we have a way of creating a 2d grid of cells describing a cavern. Depending on an input value (the current depth) and its coordinates, each cell belongs to one of 3 different types of terrain. On the borders of the cavern, where either x or y is 0, the terrain type depends directly on the coordinate and depth values. For a special target cell, the terrain type is fixed. For all other cells, the terrain depends on the cells directly left of and above the current cell. Finally, the values are taken modulo 3 to map them to a terrain type between 0 and 2. We should compute the terrain type of all cells from (0,0) to a given target cell and sum up the values of the terrain types.

We create a 2d numpy array for the terrain type. Going line by line starting at the top, we use the given method to determine the terrain type:

import numpy as np

def calc_geo_types(target_cell, depth):
    # target_cell is (x, y); depth is the puzzle input
    erosion_level = np.zeros((target_cell[1] + 1, target_cell[0] + 1), dtype=int)
    for y in range(len(erosion_level)):
        for x in range(len(erosion_level[0])):
            if (x, y) == target_cell:
                cell_geo_index = 0
            elif y == 0:
                cell_geo_index = x * 16807
            elif x == 0:
                cell_geo_index = y * 48271
            else:
                cell_geo_index = erosion_level[y - 1, x] * erosion_level[y, x - 1]
            erosion_level[y, x] = (cell_geo_index + depth) % 20183
    geo_type = erosion_level % 3
    return geo_type

Summing up the grid returns the answer. 

Part 2

In part 2, we should use the 2d grid determined in part 1 as a map for finding the fastest path from (0,0) to the target cell. The total time is calculated from both movement costs and equipment change costs. Movement between cells is only possible with certain equipped items, so sometimes changing equipment is necessary for a faster total time. There are three different types of equipment, each allowing movement on 2 terrain types and forbidding movement onto the third.

We use an A* search for finding the quickest path through the grid. In addition to the current coordinates, we track the current equipment. When determining the next possible movements, we filter out the cells where movement is forbidden due to the current equipment. We also add the action of staying at the current position and changing equipment. Whenever we reach the target cell equipped with the target equipment, we can stop the search and return the answer.

Originally we only searched for the quickest path within the grid created in part 1, but some quicker paths may lead outside of it. We therefore expand the grid from part 1 whenever we move to the right or lower border of the grid.
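The search above can be sketched as follows. This is a simplified variant (Dijkstra, i.e. A* with a zero heuristic), and instead of growing the grid on demand it just pads the grid by a fixed `margin` of our choosing; the tool encoding (0 = neither, 1 = torch, 2 = climbing gear, so a tool is usable exactly on terrains with a different number) is a common trick, not necessarily the one used in the actual solution:

```python
import heapq

def shortest_time(depth, target, margin=60):
    # margin is a hypothetical fixed padding; the write-up instead grows
    # the grid on demand when the search reaches its border.
    tx, ty = target
    width, height = tx + margin, ty + margin
    erosion = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            if (x, y) == (tx, ty):
                geo = 0
            elif y == 0:
                geo = x * 16807
            elif x == 0:
                geo = y * 48271
            else:
                geo = erosion[y - 1][x] * erosion[y][x - 1]
            erosion[y][x] = (geo + depth) % 20183
    terrain = [[e % 3 for e in row] for row in erosion]

    # Tools encoded so tool t is usable on terrain r exactly when t != r:
    # 0 = neither, 1 = torch, 2 = climbing gear.
    TORCH = 1
    start = (0, 0, TORCH)
    best = {start: 0}
    queue = [(0, start)]
    while queue:
        time, (x, y, tool) = heapq.heappop(queue)
        if (x, y, tool) == (tx, ty, TORCH):
            return time
        if time > best[(x, y, tool)]:
            continue  # stale queue entry
        candidates = []
        # Stay put and switch to the other usable tool (7 minutes).
        for t in range(3):
            if t != tool and t != terrain[y][x]:
                candidates.append((time + 7, (x, y, t)))
        # Move to an adjacent cell the current tool allows (1 minute).
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and terrain[ny][nx] != tool:
                candidates.append((time + 1, (nx, ny, tool)))
        for t, state in candidates:
            if t < best.get(state, float("inf")):
                best[state] = t
                heapq.heappush(queue, (t, state))
```

On the worked example from the puzzle text (depth 510, target (10,10)) this returns the expected 45 minutes.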

Link to my solutions

April 16, 2026

Solution for Codyssi 2025 - Lotus Scramble

Link to the puzzle text

Part 1

In this puzzle, we have a long string consisting of lowercase, uppercase and symbol characters. Non-symbol characters are considered not corrupted, and we should count the number of non-corrupted characters in our input.

We use the map function to transform every character of our input into a truth value. The truth values are summed up, since Python will quietly convert True to 1 and False to 0.

sum(map(only_uncorrupted, input)) 

The function for checking whether a character is corrupted uses the built-in string methods for lower- and uppercase:

def only_uncorrupted(c):
    return c.islower() or c.isupper()

Part 2

In part 2, each non-corrupted character is given a value by its position in the alphabet. Lowercase characters range from 1 for a to 26 for z, while uppercase characters are in the range 27 - 52. We should sum up the values of the non-corrupted characters only and ignore corrupted characters.

We again use the map function, but with a function returning the character values instead. We get the values with the help of the ord function, which returns the ASCII value of a character. We then subtract the ASCII value of a or A to get the position of the character in the alphabet:

def char_values(c):
    if c.islower():
        return ord(c) - ord("a") + 1
    if c.isupper():
        return ord(c) - ord("A") + 27
    return 0

Part 3

In part 3, we now also get a way of calculating the character value of symbol characters. The formula depends on the character value of the previous character. We should again sum up the character values of everything in our input.

Since we now depend on the previous value, we instead keep a list of all character values so far. For lowercase and uppercase characters, we use the previous method and append the value to our list. For symbols, we take the last value of the list and apply the formula. At the end, we sum up our list for the answer.
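This approach can be sketched as follows. Since the symbol formula itself is not reproduced here, it is passed in as a placeholder function `symbol_value` taking the previous character value:

```python
def total_value(text, symbol_value):
    # symbol_value stands in for the puzzle's formula for symbol
    # characters, which takes the previous character value as input.
    values = []
    for c in text:
        if c.islower():
            values.append(ord(c) - ord("a") + 1)
        elif c.isupper():
            values.append(ord(c) - ord("A") + 27)
        else:
            values.append(symbol_value(values[-1] if values else 0))
    return sum(values)
```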

Link to my solutions

April 06, 2026

Conformal Analysis Tutorial

Introduction to conformal analysis

In conformal analysis we add a confidence level to prediction methods, even when the original method does not provide one. This is useful whenever we want to attach a number to each prediction to quantify the uncertainty in the result. A necessary change for this is not predicting a single class, but a set of possible classes. So a neural network might predict a set of possible classes instead of just a single class, for example when it is not certain enough for a single answer. For each answer we can be confident the correct class is in this set at a given confidence level: a confidence level of 0.95 means the correct answer will be in the answer set in 95% of cases, while a confidence level of 0.99 means it will be in 99% of cases.
There is a trade-off between the desired confidence and the size of the answer set:
higher confidence levels lead to larger answer sets.

We show how to add conformal prediction to an existing neural network, so the focus of this tutorial is not on how neural networks work.

Just a quick recap on classification using neural networks:

When predicting a class with a neural network, we take some input data and run it through multiple layers, transforming the input at each step. The layers differ in the functions used and the size of their input/output. The last layer usually contains a value for each label. In normal prediction, the label with the highest value is used as the final answer.


These scores are just intermediate values, not probabilities, so they cannot be used directly to get a quantitative confidence value. In conformal analysis, we add another step here. When we want to get an answer set, instead of using the highest value at the last layer, we compare every class score with some threshold and select every class scoring higher. This threshold value depends on the desired confidence level and needs to be calculated in advance using the conformal analysis.
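As a minimal sketch of this selection step (the scores and threshold here are placeholder values; computing the actual threshold is the subject of the calibration step later in the tutorial):

```python
import numpy as np

def answer_set(class_scores, threshold):
    # Instead of taking only the argmax, keep every class whose
    # score exceeds the threshold.
    return np.flatnonzero(np.asarray(class_scores) > threshold).tolist()
```

Note that this can return several classes, or none at all, depending on how the scores compare to the threshold.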


This tutorial shows a simple version turning a neural network based prediction into a confidence score.

The complete tutorial can be found at https://github.com/nocicadaleftbehind/ConformalAnalysisTutorial

Build neural network based prediction

This tutorial focuses on the conformal prediction part, so for training we just use the standard toy problem: MNIST. The MNIST dataset consists of grayscale images of digits, and the task is classifying each image into the single digit shown. For this tutorial, we based the neural network implementation on a PyTorch example script (https://github.com/pytorch/examples).

Here we train a neural network to classify an image into one of 10 classes. The last layer of this network outputs a log probability score, which we use as the basis for our conformal prediction.

To adapt the existing implementation for the training part, we need just one change. Instead of splitting the dataset into just training and test sets, we also create an additional calibration split. In our example, we used an 80-10-10 split: 80% of the dataset is used for training, 10% for testing and 10% for the new calibration step. The calibration split is used in the next step, so we need to save its indices for later use.
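The split can be sketched like this; the function name and the NumPy-based index handling are our own illustration, independent of how the PyTorch example script loads its data:

```python
import numpy as np

def split_indices(n_samples, train_frac=0.8, test_frac=0.1, seed=0):
    # Shuffle all indices once, then cut the permutation into
    # train / test / calibration parts (80-10-10 by default).
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(n_samples * train_frac)
    n_test = int(n_samples * test_frac)
    return (idx[:n_train],
            idx[n_train:n_train + n_test],
            idx[n_train + n_test:])
```

The third index array is the part we persist for the calibration step.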

The complete script can be found as 01_training.py

Converting to confidence

The next step uses the trained neural network to compute the thresholds needed for the prediction. This step is new and specific to conformal prediction, so we need to write it ourselves based on the existing example implementation.

We use the calibration split set aside in the first step. Since these images were not used in training, they are independent and can be used for calibration. We take the already trained network and run the prediction on every element of the calibration split. For every input datapoint, we are only interested in the predicted scores for each class. The output of the network is the logarithmic probability for each class, so the value ranges from minus infinity at the lowest to 0 at the highest. While this score could be used directly, by convention we convert it to a (non-logarithmic) alpha score. Taking the exponential of the network output gives us a probability between 0 (lowest) and 1 (highest). The alpha value is just 1 - value, so a value of 0 is now the best possible score. After this short calculation we save both the predicted value of the correct class and the correct label.

Finally, we take all the alpha values for each label, sort them and save them in a separate file for the last step.
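The alpha computation and per-label sorting can be sketched as follows (a standalone illustration; the actual script works on batches coming out of the PyTorch model):

```python
import numpy as np

def calibration_alphas(log_probs, labels):
    # log_probs: (n, n_classes) log-probabilities from the trained network
    # labels: (n,) correct class of each calibration sample
    log_probs = np.asarray(log_probs)
    labels = np.asarray(labels)
    p_correct = np.exp(log_probs[np.arange(len(labels)), labels])
    alphas = 1.0 - p_correct  # 0 is now the best possible score
    by_label = {c: np.sort(alphas[labels == c]) for c in np.unique(labels)}
    return np.sort(alphas), by_label
```

Both the globally sorted scores and the per-label groups are returned, since the per-class variant of the last step needs the latter.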

The complete script can be found as 02_calibration.py

Predicting using conformal analysis

The last step uses the trained network and the calculated alpha values to predict classes from new input. The prediction itself is similar to a normal neural network. The main changes are an additional parameter (the desired confidence) and that, for conformal prediction, the output is now a set of labels instead of a single class. This can also mean we predict no labels at all, if no class has enough confidence for our given confidence threshold.

First we use the sorted scores from the previous step. Based on the desired confidence, we calculate the offset 1 - confidence from the top of the sorted scores. So for a desired confidence of 95%, we use as the threshold the score at the boundary of the highest 5% of observed scores.


For the prediction, we run the network as usual to obtain the scores for each class. Instead of just choosing the single class with the highest score, we now select all labels based on this threshold.
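These two steps can be sketched as follows; the exact quantile formula is one common choice for split conformal prediction and may differ in detail from the tutorial scripts:

```python
import numpy as np

def conformal_threshold(sorted_alphas, confidence):
    # Take the score at the boundary of the highest (1 - confidence)
    # fraction of the sorted calibration scores.
    sorted_alphas = np.asarray(sorted_alphas)
    n = len(sorted_alphas)
    k = min(int(np.ceil(confidence * (n + 1))) - 1, n - 1)
    return sorted_alphas[k]

def predict_set(log_probs, threshold):
    # Keep every class whose alpha score is at most the threshold.
    alphas = 1.0 - np.exp(np.asarray(log_probs))
    return np.flatnonzero(alphas <= threshold).tolist()
```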

In a variant, we can also use an individual threshold for each class. For the calculation, this means we group the scores from the previous step by class. The threshold calculation is the same, but takes only the scores of each class separately. Per-class thresholds are useful when the classifier is unbalanced in its ability to predict the different classes. We also need enough data for each class, so the threshold can be established.

The complete script can be found as 03_prediction.py
The implementation contains both a global and a class-based implementation of the thresholding.