Hello there! My name is Erik and I am a Canadian software developer in the discipline of visual computing, which encompasses image synthesis (rendering), image analysis, and computer vision. I have a Master’s in Visual Computing from Saarland University in Saarbrücken, Germany. I also have a Bachelor’s in Electrical Engineering with a Minor in Math from Carleton University in Ottawa, Canada.

Technology has always interested me, and I love the immediacy with which one can receive feedback when developing software. This makes for a much more tactile development process than building circuits or designing hardware. I particularly enjoy it when building software requires architectural concepts or mathematical reasoning. Really, the more math required, the better.

Experience

This is only a highlight of my experience; to see more, I invite you to peruse my CV.

QNX Software Systems

Advanced Driver Assistance Systems (ADAS) are helping to save lives and make driving safer for everyone while we wait for mass-market autonomous transportation solutions. QNX Software Systems develops a proprietary realtime operating system and a range of middleware products for the embedded space, in particular the automotive market. I worked on the Sensor Framework portion of QNX’s ADAS product, which provides a consistent API for efficiently accessing data from a range of IMU, GPS, radar, and lidar sensors, in addition to cameras. I also assisted with experimental projects in QNX’s Autonomous Vehicle Innovation Centre (AVIC) and was proud to have developed the rearview “mirror” display in our 2019 CES showing (embedded video) of a modified Karma Revero.

Max-Planck-Institut für Informatik

I am certain I will always remember 2020 as the year of Mildenhall et al.’s NeRF (neural radiance fields), which built on previous implicit scene representation work to demonstrate a simple system for encoding a scene into neural network weights from images. I experimented with different ideas for advancing this exciting approach and finally settled on applying it to monocular reconstruction (extracting surface geometry from a single video of a general deforming object). An initial approach to this problem combined two previous works: NR-NeRF for handling temporal deformations and NeuS for accurate surface reconstruction. However, while this combination is theoretically capable of modeling any scene and is sufficient for some, monocular depth ambiguity makes it hard to learn the correct solution, particularly for scenes with large deformations (e.g. humans). Addressing this required the addition of a novel 3D scene flow loss. I wrote extensively about this in my thesis and a conference publication (pending). The project has a webpage and the code for this work is publicly available. Check out an example result below.
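If you haven’t seen NeRF before, the heart of these methods is volume rendering along camera rays: a network predicts a density and colour at sampled points along each ray, and these are composited with the usual quadrature weights. A minimal sketch of that compositing step (plain NumPy, illustrative only and not my actual implementation) looks roughly like this:

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Composite per-sample density/colour along one ray (standard NeRF quadrature).

    densities: (N,) non-negative volume densities sigma_i at the samples
    colors:    (N, 3) RGB predicted at the samples
    deltas:    (N,) distances between consecutive samples
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i without being absorbed
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    # Expected colour along the ray (weights also give an expected depth if needed)
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights
```

Everything interesting (deformation handling, surface extraction, the scene flow loss) sits on top of this basic compositing machinery.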

The input image is on the left. The reconstruction is shown uncolored from the camera perspective to give the best sense of the recovered geometry. The colored reconstruction is also shown from a novel viewpoint to demonstrate stability and 3D consistency. Note that colors are not strictly accurate and are for visualization only.

Fraunhofer IIS

With the support of the German Academic Exchange Service (DAAD) through their Research Internships in Science and Engineering (RISE) program, I spent a summer of my undergraduate studies in Erlangen at Fraunhofer IIS assisting with research into blind source separation (BSS). Most notably, I contributed to craffel’s open source music information retrieval library, mir_eval, adding Python support for framewise and stereo source separation quality metrics. I worked under the excellent supervision of faroit.
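For a sense of what those metrics look like in use: mir_eval exposes them as plain NumPy functions in its separation module. A small usage sketch (hedged, assuming the mir_eval API as it stood then, with dummy signals standing in for real separations) would be:

```python
import numpy as np
import mir_eval.separation

rate = 44100
# Dummy mono sources: shape (n_sources, n_samples)
reference = np.random.randn(2, 10 * rate)
estimated = reference + 0.1 * np.random.randn(2, 10 * rate)

# Framewise metrics: SDR/SIR/SAR evaluated over sliding windows (here 1 s windows)
sdr, sir, sar, perm = mir_eval.separation.bss_eval_sources_framewise(
    reference, estimated, window=rate, hop=rate)

# Stereo (multichannel) metrics: inputs shaped (n_sources, n_samples, n_channels)
ref_stereo = np.stack([reference, reference], axis=-1)
est_stereo = np.stack([estimated, estimated], axis=-1)
sdr_i, isr, sir_i, sar_i, perm_i = mir_eval.separation.bss_eval_images(
    ref_stereo, est_stereo)
```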

GasTOPS

Monitoring the health of critical machinery is paramount to avoiding costly and dangerous equipment failures. The health of any oil-lubricated machinery can be monitored using Oil Debris Monitoring (ODM) technology, and GasTOPS’s MetalSCAN product line is a leading solution for monitoring turbines and generators. I worked on design verification and environmental testing for MetalSCAN 4000, as well as on the Windows application used in production for interfacing with the device. Most significantly, I completed a standalone project called PeakSCAN, which controlled a function generator and oscilloscope to replace an expensive, aging spectrum analyzer.
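The general idea behind that kind of instrument replacement is straightforward: step a function generator through a frequency sweep and read the response amplitude back from the oscilloscope at each point. A purely illustrative sketch over generic SCPI with PyVISA (not the actual PeakSCAN code; the instrument addresses and commands here are hypothetical and vary by vendor) might look like:

```python
import numpy as np
import pyvisa

rm = pyvisa.ResourceManager()
# Hypothetical VISA addresses; the real instruments and command sets differ.
fgen = rm.open_resource("USB0::0x0957::0x2807::MY_FGEN::INSTR")
scope = rm.open_resource("USB0::0x0957::0x1796::MY_SCOPE::INSTR")

freqs = np.logspace(2, 5, 50)  # sweep 100 Hz to 100 kHz
response = []
for f in freqs:
    fgen.write(f"SOUR1:FREQ {f}")   # set stimulus frequency
    fgen.write("SOUR1:VOLT 1.0")    # fixed 1 Vpp drive level
    # Read back the response amplitude seen on channel 1
    amplitude = float(scope.query("MEAS:VAMP? CHAN1"))
    response.append(amplitude)

# Magnitude response in dB relative to the 1 Vpp stimulus
gain_db = 20 * np.log10(np.array(response) / 1.0)
```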

Projects

Ray Tracer for CG1

As part of the Computer Graphics 1 course at Saarland University, a ray tracer was written during the semester. At the end of the semester, the ray tracer was used to render an artistic image and a website was produced to document the image production process and ray tracer features. This website can be accessed here.
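At its core, a ray tracer is just a very large number of ray-primitive intersection tests. A minimal ray-sphere intersection (illustrative only, not the course code) gives the flavour:

```python
import numpy as np

def intersect_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance along the ray, or None.

    Solves |origin + t*direction - center|^2 = radius^2 for t,
    assuming `direction` is normalized.
    """
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # ray misses the sphere
    sqrt_disc = np.sqrt(disc)
    t = (-b - sqrt_disc) / 2.0  # nearer of the two intersections
    if t > 1e-6:
        return t
    t = (-b + sqrt_disc) / 2.0  # ray may start inside the sphere
    return t if t > 1e-6 else None
```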

Zebra Dodge

Zebra Dodge is a mobile game for iOS and Android that I completed with an artistic director as CloseCall Studios. The game follows Ace through zebra-dodging adventures on a strange world with a sinister presence. You can have a look at the website for more information and to see some features and gameplay.



AkimBear

AkimBear is a game I created with my compadres for Ludum Dare 43. This game was very well received, and it was fantastic to watch people playing it on streams and to hear great feedback that will help us improve our designs in the future! We focused on using lessons from a talk by Vlambeer’s Jan Willem Nijman about game feel and on integrating the theme (Sacrifices must be made) into the game design rather than the narrative. You can check it out at its itch page, but I hope you have a controller handy if you do!

MACRO

MACRO is a game I created with my companions for Ottawa Game Jam #4, hosted by You.i TV. In MACRO you play as Mac Robinson in a trippy adventure through the world of the very small (it’s a platformer). All animations were created by painstakingly rotoscoping a miniature, and we focused on building the game’s atmosphere. You can check it out at its itch page.

MotherDucker

MotherDucker is a game I created with my comrades for Ludum Dare 41. This was our second game with roughly the same group. We managed to put together a unique mother-ducking experience with all original models, audio, and controls. I worked on programming the gameplay and integrating components in Unity. We released this game on itch.io and you can check it out at its itch page!

Thought Experiment

Thought Experiment is a game I created with my confidants for Global Game Jam 2018. This was the first game any of us had completed, and we were very proud of what we managed to throw together in just 48 hours. We produced the large majority of our 3D models (a fireaxe proved a bit too much) and all audio ourselves. I worked on the integration of assets in Unity and programmed all aspects of the gameplay. We released the game and you can check it out at its itch page!

First-In Response Evaluation (FIRE) System

This was the fourth-year project for my undergrad, for which I got to work with an excellent team on a project I feel strongly about. After the Grenfell Tower fire, we saw UAVs being used in the aftermath, and this got us questioning the technical feasibility, and the value to firefighters, of using UAVs during active fires. We developed a proof-of-concept platform for evaluating this robotics application using UAV, sensor, and communication technology.

I worked on autopilot selection and the software architecture for UAV control. I also looked into the potential application of photogrammetry to enable a novel data visualization. If you are interested in knowing more, you can read my final report.

CognoSynth

CognoSynth was a project completed for the first CUHacking hackathon with a couple of friends. We used an Emotiv Epoc+ headset to measure the user’s EEG signals and produce tones based on the strengths of different brain wave frequency bands. The accompanying app was developed for Android using Android Studio. A machine learning algorithm was developed using Weka (a research-oriented Java library) to classify cognitive load and further modify the generated sound to better reflect the user’s mental state. The sound was generated by playing multiple harmonics of a base sine wave, scaled using the power of the measured EEG.
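The synthesis itself is simple additive synthesis: sum a handful of harmonics of a base frequency and scale each one by the measured power in an EEG band. A rough sketch of that mapping (NumPy only, with made-up band powers, not the original Android code) would be:

```python
import numpy as np

def eeg_tone(base_freq, band_powers, duration=1.0, rate=44100):
    """Sum harmonics of `base_freq`, each scaled by one EEG band power.

    band_powers: relative power in each frequency band (e.g. delta, theta,
    alpha, beta, gamma), used directly as the harmonic amplitudes.
    """
    t = np.arange(int(duration * rate)) / rate
    tone = np.zeros_like(t)
    for k, power in enumerate(band_powers, start=1):
        tone += power * np.sin(2 * np.pi * k * base_freq * t)  # k-th harmonic
    # Normalize to avoid clipping when the band powers are large
    return tone / np.max(np.abs(tone))

# Example: stronger alpha and beta bands emphasize the 3rd and 4th harmonics
samples = eeg_tone(220.0, band_powers=[0.2, 0.3, 0.9, 0.8, 0.1])
```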

I implemented the sound generation using the output of the machine learning classifier and the individual brain wave frequency band strengths. If you are interested, our Devpost page has more information. We won first place for this hack!