• Spacing Out Elves for Advent of Code 2022 – Day 23

    Or Dance of the Sugar Plum Fairies?

    Day 23 of Advent of Code had us simulate a group of elves spacing themselves out to plant fruit. This one is a little slapdash, but it’s still fun to look at. The ground and view could have been nicer, but the holidays always limit what I can attempt in a reasonable amount of time.

  • Path Finding for Advent of Code 2022 – Day 12

    Day 12 of Advent of Code is a path finding problem, which is ripe for visualization. I used the A* algorithm for my solution, which should find the shortest path while exploring as little of the terrain as possible. It’s always crazy to watch these algorithms in action as they search and then narrow in on the solution near the end.
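
    The heart of the search is compact. Here is a rough sketch over a grid of heights (using a linear scan of the open set instead of a proper priority queue; not the repository’s exact code):

    struct Point: Hashable {
        let x: Int
        let y: Int
    }
    
    // A* over a row-major height grid; a step is allowed if the destination is at
    // most one unit higher than the current cell (the Day 12 climbing rule).
    func aStar(heights: [[Int]], start: Point, goal: Point) -> [Point]? {
        func heuristic(_ p: Point) -> Int {
            abs(p.x - goal.x) + abs(p.y - goal.y)
        }
        
        var openSet: Set<Point> = [start]
        var cameFrom: [Point: Point] = [:]
        var gScore: [Point: Int] = [start: 0]
        
        // Every member of openSet always has a gScore, so the defaults below never fire.
        while let current = openSet.min(by: {
            gScore[$0, default: 0] + heuristic($0) < gScore[$1, default: 0] + heuristic($1)
        }) {
            if current == goal {
                // Rebuild the path by walking the cameFrom chain backwards.
                var path = [current]
                var node = current
                while let previous = cameFrom[node] {
                    path.append(previous)
                    node = previous
                }
                return Array(path.reversed())
            }
            
            openSet.remove(current)
            
            let neighbors = [
                Point(x: current.x + 1, y: current.y),
                Point(x: current.x - 1, y: current.y),
                Point(x: current.x, y: current.y + 1),
                Point(x: current.x, y: current.y - 1)
            ]
            
            for neighbor in neighbors {
                guard heights.indices.contains(neighbor.y),
                      heights[neighbor.y].indices.contains(neighbor.x),
                      heights[neighbor.y][neighbor.x] <= heights[current.y][current.x] + 1
                else { continue }
                
                let tentative = gScore[current, default: 0] + 1
                
                if tentative < gScore[neighbor, default: .max] {
                    cameFrom[neighbor] = current
                    gScore[neighbor] = tentative
                    openSet.insert(neighbor)
                }
            }
        }
        
        return nil
    }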

    Searching for a path. Running up that hill.

    Lessons Learned

    This is the first visualization with a large number of nodes. For just the terrain, there are 5,904 nodes. The renderer uses one giant buffer for storing constants and allows a max of 3 renders in flight at a time. This means I can only use 1/3 of the buffer per render pass. In my original implementation, I was blowing past the 3 MB buffer, which at best causes artifacts and at worst causes slowdowns and lockups. To fix this, I added:

    1. The ability to specify the buffer size at initialization.
    2. A check after a render pass to fatally crash the app if more than the maximum buffer allowance is used.
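
    The check is roughly this shape (a sketch; the names here are illustrative, not the renderer’s actual properties):

    // With up to 3 renders in flight, each pass may only consume a third of the
    // constants buffer. If a pass writes past that allowance, crash loudly rather
    // than silently corrupt another in-flight frame.
    func checkConstantsUsage(bytesUsedThisPass: Int, bufferSize: Int, framesInFlight: Int) {
        let perPassAllowance = bufferSize / framesInFlight
        
        if bytesUsedThisPass > perPassAllowance {
            fatalError("Render pass used \(bytesUsedThisPass) bytes, but only \(perPassAllowance) are available per pass")
        }
    }
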
  • Reading a Display for Advent of Code 2022 – Day 10

    Day 10 of Advent of Code had us determine which pixels should be enabled on a broken display. These pixels spell out a string that is the final answer. Sometimes challenges like this can be interesting to look at, because the puzzle has the display go through a series of iterations before the string takes shape. This puzzle was much more straightforward.

    Lessons Learned

    My renderer never had a means to update the perspective matrix, leaving me stuck with near and far planes of 0.01 and 1000 and a field of view of 60°. I added updatePerspective to allow modification of these values at any time.
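
    Under the hood, updating the perspective mostly means rebuilding the projection matrix. A sketch of that piece (the actual updatePerspective signature may differ, and the field of view here is assumed to be in radians):

    import Foundation
    import simd

    // Build the standard right-handed Metal projection matrix. An updatePerspective
    // method would store this into the renderer's uniforms.
    func perspectiveMatrix(fieldOfView: Float, aspectRatio: Float, nearZ: Float, farZ: Float) -> simd_float4x4 {
        let ys = 1 / tanf(fieldOfView * 0.5)
        let xs = ys / aspectRatio
        let zs = farZ / (nearZ - farZ)
        
        return simd_float4x4(columns: (
            SIMD4<Float>(xs, 0,  0,          0),
            SIMD4<Float>(0,  ys, 0,          0),
            SIMD4<Float>(0,  0,  zs,        -1),
            SIMD4<Float>(0,  0,  zs * nearZ, 0)
        ))
    }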

    I noticed in my previous visualization that memory usage was extremely high. I didn’t think too much of this until this visualization also consumed a lot of memory for no real reason. This is a simple render comparatively. The last time this happened, I was bitten by CVMetalTextureCache taking a reference to the output texture as a parameter:

    CVMetalTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, 
        textureCache, 
        pixelBuffer, 
        nil, 
        .bgra8Unorm, 
        CVPixelBufferGetWidth(pixelBuffer), 
        CVPixelBufferGetHeight(pixelBuffer), 
        0, 
        &currentMetalTexture
    )

    In the above code, the function takes a reference to currentMetalTexture as output. This would cause Swift to never release any previous value in currentMetalTexture, effectively leaking every texture made. Assigning nil to currentMetalTexture was the fix in that case.
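
    In code, that earlier fix is just one line before the call above (sketching the idea):

    // Drop the previous frame's texture before the C API overwrites the pointer,
    // so ARC actually releases it.
    currentMetalTexture = nil
    // ...then make the CVMetalTextureCacheCreateTextureFromImage call shown above.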

    But this was not the issue. It felt like another texture leak, because the memory size was growing quickly with every frame. A look at the memory graph debugger showed 100,000+ allocations in Metal, so I was on the right track.

    Metal, out of control

    Most of the objects still in memory were piles of descriptors and other bookkeeping objects, all stuck inside of autorelease pools. Since the rendering function is just one long async function, anything created in an autorelease pool inside of it never gets released until the function eventually ends. Wrapping the function in an autoreleasepool closure solved the issue and brought memory consumption for both this visualization and the previous one under control.
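
    A minimal sketch of the idea (totalFrames and renderSingleFrame(_:) are illustrative stand-ins for the real loop):

    for frame in 0 ..< totalFrames {
        try autoreleasepool {
            // Everything autoreleased while drawing this frame is freed here, instead of
            // piling up until the long-running async function finally returns.
            try renderSingleFrame(frame)
        }
    }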

  • Stacking Crates for Advent of Code 2022, Day 5

    Day 5 of Advent of Code revolves around a crane moving crates around to different stacks. This was a great opportunity to try my new 3D renderer for generating visualizations.

    Over an hour of crate stacking goodness!

    What Was Missed?

    This was the first attempt at using the renderer, so a proper implementation was bound to expose which features I didn’t know I needed.

    Animation is a bit weird if you don’t have easing functions. I implemented a small set of functions on the 3D context, so that I can ease in and ease out animations as the crates go up, over, and down.
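
    Something like these quadratic curves covers the basics (a sketch; the exact set on the 3D context may differ):

    // Map linear progress (0...1) onto eased progress (0...1).
    func easeIn(_ t: Float) -> Float {
        t * t
    }
    
    func easeOut(_ t: Float) -> Float {
        1 - (1 - t) * (1 - t)
    }
    
    func easeInOut(_ t: Float) -> Float {
        t < 0.5 ? 2 * t * t : 1 - 2 * (1 - t) * (1 - t)
    }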

    Rendering text was an easy implementation when using CoreGraphics and CoreText, but for 3D renderers, it gets more complex. I built a createTexture function that generates a CoreGraphics context of a given size, uses the given closure to let you draw as you need, and then converts that to a texture that is stored in the texture registry. There is a bit of overlap here with the 2D renderer, but for now, the utilities exist as copies between the two implementations.
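
    The shape of that helper is roughly this (a sketch: the real version stores the result in the texture registry instead of returning it, and its exact signature may differ):

    import CoreGraphics
    import Metal
    import MetalKit

    // Draw into a bitmap CGContext via the closure, then hand the image to MTKTextureLoader.
    func createTexture(width: Int, height: Int, device: MTLDevice, draw: (CGContext) -> Void) throws -> MTLTexture {
        let context = CGContext(
            data: nil,
            width: width,
            height: height,
            bitsPerComponent: 8,
            bytesPerRow: width * 4,
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
        )!
        
        draw(context)
        
        let image = context.makeImage()!
        let loader = MTKTextureLoader(device: device)
        return try loader.newTexture(cgImage: image, options: nil)
    }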

    Ooops!

    There are a couple of rough edges if you manage to watch through the whole 1 hour video. I try not to rewrite too much of my original solution when I’m creating the visualized variant. I typically lift the structures and logic from the initial project and slightly adapt them to work across both the console and visualized versions. Because of that, I’m often stuck with weird state. If you watch the crates go up and over, they use the height of the tallest stack, even if that stack isn’t traversed, so crates travel further than they need to.

    Also, because the movement is generated from a state where the moving crates are removed but not yet placed in their destination, you’ll sometimes see crates travel down through their own stack and move across at the wrong height. I’ll chalk that up to a quirk and leave it.

  • Advent of Code In 3D!

    In my previous post, I detailed how I combined CoreGraphics, AVFoundation, and Metal to quickly view and generate visualizations for Advent of Code. With this new setup, I wondered: could I do the image generation part completely in Metal? I have been following tutorials from Warren Moore (and his Medium page), The Cherno, and LearnOpenGL for a while, so I took this opportunity to test out my newfound skills.

    If you’d like to follow along, the majority of the code is in the Solution3DContext.swift file of my Advent of Code 2022 repository.

    Subtle Differences

    When using CoreGraphics, I had a checkin and submit architecture:

    • Get a CoreGraphics context with nextContext()
    • Draw to this context using CoreGraphics APIs.
    • Submit the context with submit(context:pixelBuffer:)

    With 3D rendering, you typically generate a scene, tweak settings on the scene, and submit rendering passes to do the work for you.

    Before rendering, meshes and textures need to be preloaded. For this, I created the following:

    • loadMesh provides a means to load model files from the local bundle.
    • loadBoxMesh creates a mesh of a box with given dimensions in the x, y, & z directions.
    • loadPlaneMesh creates a plane with the given dimensions in the x, y, & z directions.
    • loadSphereMesh creates a sphere with a given radius in the x, y, & z directions.

    The renderer uses a rough implementation of Physically Based Rendering. Each mesh is therefore composed of information about base color, metallic, roughness, normals, emissiveness, and ambient occlusion. The methods above exist in two forms: one that takes raw values and one that takes textures.

    With the meshes available above, a simplistic node system is used to define objects in the scene. Each node has a transformation matrix and points to a mesh and materials. The materials are copied at initialization, so a mesh can be created with some defaults, but then modified later.

    With a scene in place, the process of generating images becomes:

    • Modify existing node transformations and materials.
    • Use snapshot to render the scene to an offscreen texture and then submit it to our visible renderer and encoding system.

    If I wanted to render a scene of spheres with different material types, I could use the following:

    try loadSphereMesh(name: "Red Sphere", baseColor: SIMD3<Float>(1.0, 0.0, 0.0), ambientOcclusion: 1.0)
    
    let lightIntensity = SIMD3<Float>(1, 1, 1)
    
    addDirectLight(name: "Light 0", lookAt: SIMD3<Float>(0, 0, 0.0), from: SIMD3<Float>(-10.0,  10.0, 10.0), up: SIMD3<Float>(0, 1, 0), color: lightIntensity)
    addDirectLight(name: "Light 1", lookAt: SIMD3<Float>(0, 0, 0.0), from: SIMD3<Float>( 10.0,  10.0, 10.0), up: SIMD3<Float>(0, 1, 0), color: lightIntensity)
    addDirectLight(name: "Light 2", lookAt: SIMD3<Float>(0, 0, 0.0), from: SIMD3<Float>(-10.0, -10.0, 10.0), up: SIMD3<Float>(0, 1, 0), color: lightIntensity)
    addDirectLight(name: "Light 3", lookAt: SIMD3<Float>(0, 0, 0.0), from: SIMD3<Float>( 10.0, -10.0, 10.0), up: SIMD3<Float>(0, 1, 0), color: lightIntensity)
    
    updateCamera(eye: SIMD3<Float>(0, 0, 5), lookAt: SIMD3<Float>(0, 0, 0), up: SIMD3<Float>(0, 1, 0))
    
    let numberOfRows: Float = 7.0
    let numberOfColumns: Float = 7.0
    let spacing: Float = 0.6
    let scale: Float = 0.4
    
    for row in 0 ..< Int(numberOfRows) {
        for column in 0 ..< Int(numberOfColumns) {
            let index = (row * 7) + column
            
            let name = "Sphere \(index)"
            let metallic = 1.0 - (Float(row) / numberOfRows)
            let roughness = min(max(Float(column) / numberOfColumns, 0.05), 1.0)
            
            let translation = SIMD3<Float>(
                (spacing * Float(column)) - (spacing * (numberOfColumns - 1.0)) / 2.0,
                (spacing * Float(row)) - (spacing * (numberOfRows - 1.0)) / 2.0,
                0.0
            )
            
            let transform = simd_float4x4(translate: translation) * simd_float4x4(scale: SIMD3<Float>(scale, scale, scale))
            
            addNode(name: name, mesh: "Red Sphere")
            updateNode(name: name, transform: transform, metallicFactor: metallic, roughnessFactor: roughness)
        }
    }
    
    for index in 0 ..< 2000 {
        let time = Float(index) / Float(frameRate)
        
        for row in 0 ..< Int(numberOfRows) {
            for column in 0 ..< Int(numberOfColumns) {
                let index = (row * 7) + column
                
                let name = "Sphere \(index)"
                
                let translation = SIMD3<Float>(
                    (spacing * Float(column)) - (spacing * (numberOfColumns - 1.0)) / 2.0,
                    (spacing * Float(row)) - (spacing * (numberOfRows - 1.0)) / 2.0,
                    0.0
                )
                
                let transform = simd_float4x4(rotateAbout: SIMD3<Float>(0, 1, 0), byAngle: sin(time) * 0.8) *
                    simd_float4x4(translate: translation) *
                    simd_float4x4(scale: SIMD3<Float>(scale, scale, scale))
                
                updateNode(name: name, transform: transform)
            }
        }
        
        try snapshot()
    }
    

    Or, I can go a bit crazy with raw objects, models, and lights:

    Additional Notes

    To make the encoding and muxing pipeline work, you must vend a CVPixelBuffer from AVFoundation and later submit it back. Apple provides CVMetalTextureCache as a great mechanism to create a Metal texture that points to the same IOSurface as a pixel buffer, making the rendering target nearly free to create.
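
    The whole path is only a few calls (a sketch, assuming an existing device and pixelBuffer, with error handling trimmed):

    import CoreVideo
    import Metal

    // Build a cache once, then wrap each vended pixel buffer in a Metal texture.
    var textureCache: CVMetalTextureCache?
    CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache)
    
    var cvTexture: CVMetalTexture?
    CVMetalTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault,
        textureCache!,
        pixelBuffer,
        nil,
        .bgra8Unorm,
        CVPixelBufferGetWidth(pixelBuffer),
        CVPixelBufferGetHeight(pixelBuffer),
        0,
        &cvTexture
    )
    
    // The texture shares its IOSurface with the pixel buffer, so rendering into it
    // also fills the buffer that AVFoundation will encode.
    let renderTarget = CVMetalTextureGetTexture(cvTexture!)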

    Rendering pipelines tend to use semaphores to ensure that only a specific number of frames are in flight and don’t reuse resources that are being modified. This code uses Swift Concurrency, which requires that forward progress always be made, and that goes against a semaphore that may block indefinitely. Xcode already complains about this for Swift 6.0, but I’ll cross that bridge once I get there.
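
    For reference, the traditional pattern looks like this (a generic sketch, not this project’s exact code):

    import Foundation
    import Metal

    let maxFramesInFlight = 3
    let frameSemaphore = DispatchSemaphore(value: maxFramesInFlight)
    
    func render(commandQueue: MTLCommandQueue) {
        // Block until one of the in-flight frames is done with its resources.
        frameSemaphore.wait()
        
        guard let commandBuffer = commandQueue.makeCommandBuffer() else {
            frameSemaphore.signal()
            return
        }
        
        // ... encode the frame here ...
        
        commandBuffer.addCompletedHandler { _ in
            // Release the slot once the GPU has finished with this frame's buffers.
            frameSemaphore.signal()
        }
        
        commandBuffer.commit()
    }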

    Model I/O is both amazing and infuriating. It can universally read models like OBJ and USDZ files, but what you discover is that everyone makes their models a little bit differently. As noted above, each material aspect could come from a texture, a float value, or a float vector. Even though you get the translation for free, the interpretation of the results can turn into a large pile of code.

  • Advent of Code Visualizer Redux

    For the past couple of years, I’ve done my Advent of Code submissions in Swift, and used a custom pipeline of CoreGraphics, Metal, and AVFoundation to streamline the creation of visualizations. This worked great, but the solution to do this felt a little hacky. I’ve now rewritten this pipeline to follow modern practices and be more streamlined.

    If you want to follow along, my new code is available on GitHub.

    The Old Way

    The basic process of generating the visualizations is:

    1. Run the Advent of Code solution until we’ve reached the point of creating a frame.
    2. Get a CVPixelBuffer from the AVFoundation API that’s appropriate for encoding.
    3. Create a CoreGraphics context pointing to the CVPixelBuffer memory.
    4. Draw the frame.
    5. Simultaneously:
      • Submit the CVPixelBuffer to the Metal renderer.
      • Submit the CVPixelBuffer to AVFoundation for encoding and muxing.

    When I originally set up the code, SwiftUI was brand new, it was limited as an API, and my experience in it was next to none. A rough layout of the code was:

    1. A Metal view with a closure that does the “work”. This closure passed an “animator” object as its only parameter.
    2. During construction, the Metal view creates the “animator”, which builds all of the AVFoundation contexts needed for encoding and muxing the animation.
    3. Once the Metal view appears, it calls the “work” closure, which starts the Advent of Code solution.
    4. At the point of an animation frame, the “work” closure calls a draw method on the “animator”.
    5. This draw method takes a closure which passes a CGContext as its only parameter. The draw closure is where the frame drawing should occur.
      • Before the closure is called, a CVPixelBuffer is grabbed from the AVFoundation pixel buffer pool and a CGContext is created using the memory from the CVPixelBuffer.
      • After the closure is called, the CVPixelBuffer is submitted to the encoding and muxing parts of AVFoundation.
    6. The CVPixelBuffer is also stored in a @Published variable of the “animator”. The Metal view observes this variable and uses that as a means to render the pixel buffer on the next render pass.

    Make sense? It shouldn’t. That’s way too many closures, a confusing ownership model, and a nearly incomprehensible code path.

    The New Way

    I’ve learned a lot since SwiftUI was released. SwiftUI has also changed. There has to be a better way!

    The first step was to contain everything inside of one ObservableObject. At creation, this object builds the Metal rendering context and the AVFoundation contexts. To get new drawing contexts, a nextContext method returns both a new CVPixelBuffer and CGContext. When drawing is complete, both objects are passed back to a submit method, which then does the cleaning up and vending to Metal and AVFoundation.

    All of this is done in a SolutionContext object. Any visualization just subclasses this object and overrides the run method, calling nextContext and submit as needed.

    If I wanted a solution that just pulsed a color on the screen, I could write:

    class VisualizationTestingContext: SolutionContext {
        
        override var name: String {
            "Visualization Testing"
        }
        
        override func run() async throws {
            for t in stride(from: 0.0, through: 100.0, by: 0.01) {
                let (context, pixelBuffer) = try nextContext()
                
                // Derive a pulsing value from t (the exact curve here is an assumption)
                let alphaValue = (sin(t) + 1.0) / 2.0
                let redColor = CGColor(red: 1.0 * alphaValue, green: 0.0, blue: 0.0, alpha: 1.0)
                let backgroundRect = CGRect(
                    x: 0, y: 0, 
                    width: context.width, height: context.height
                )
                
                context.setFillColor(redColor)
                context.fill(backgroundRect)
    
                submit(context: context, pixelBuffer: pixelBuffer)
            }
        }
    }

    The entire application code to run this becomes:

    struct VisualizationTestingApp: App {
        
        @StateObject var context: SolutionContext = VisualizationTestingContext(width: 800, height: 800, frameRate: 60.0)
        
        var body: some Scene {
            
            WindowGroup {
                SolutionView()
                    .environmentObject(context)
                    .navigationTitle(context.name)
            }
        }
    }

    A slightly more complex drawing example

    With just that bit of code, you can have a fully rendering, encoding, and muxing system. No more closures, no more spaghetti, and no more rendering to JPEGs and then stitching them together with FFmpeg.

    Bonus Round!

    Since I’m already rewriting everything, let’s go a couple steps further.

    Most visualizations boil down to filling in rectangles or drawing text. Instead of doing this by hand every time, I built a handful of functions to do the bounds measurements, origin coordinate conversions, and CoreGraphics object conversions for me.

    // Draw a mushroom in a box
    let grayColor = CGColor(red: 0.5, green: 0.5, blue: 0.5, alpha: 1.0)
    let textColor = CGColor(red: 1.0, green: 1.0, blue: 1.0, alpha: 1.0)
    let box = CGRect(x: 0.0, y: 0.0, width: 100.0, height: 100.0)
    let font = NativeFont.boldSystemFont(ofSize: 12.0)
    
    fill(rect: box, color: grayColor, in: context)
    draw(text: "πŸ„", color: textColor, font: font, rect: box, in: context)
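
    For reference, the fill helper is essentially an origin flip plus a fill (a sketch; the repository’s version may differ):

    // Convert a top-left-origin rect to CoreGraphics' bottom-left origin, then fill it.
    func fill(rect: CGRect, color: CGColor, in context: CGContext) {
        let flipped = CGRect(
            x: rect.minX,
            y: CGFloat(context.height) - rect.maxY,
            width: rect.width,
            height: rect.height
        )
        
        context.setFillColor(color)
        context.fill(flipped)
    }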

    Some AppKit and UIKit APIs are nearly identical, so when I need universal access to fonts and colors, I can now just use my Native* versions of them:

    #if os(macOS)
    import AppKit
    
    public typealias NativeColor = NSColor
    public typealias NativeFont = NSFont
    
    #else
    import UIKit
    
    public typealias NativeColor = UIColor
    public typealias NativeFont = UIFont
    
    #endif

    And with that said, all of the code is now universal, meaning it can be run on macOS, iOS, or iPadOS. There isn’t a huge benefit to this, but since the APIs are so close, and everything else is SwiftUI, why not?

    Note that the iOS simulator is way slower than running natively on a device. Any slowdown is typically from waiting for AVFoundation to be ready to write the next frame, and the simulator is most likely not optimized for high-speed streaming of data.

  • Formula Control 2.2 Released

    Shortly after I released Score Card 3.0, Apple sent me the dreaded “your app will be delisted” message for Formula Control. There’s nothing really different about the app, outside of it being recompiled.

  • Score Card 3.0 Released

    Score Card 3.0 has been released! Version 3.0 contains a series of changes to make the app more fun and convenient to run. The major changes include:

    Themes! Pick a theme that best suits your style. There are dark themes, light themes, low contrast themes, colorful themes, and more.

    Sharing! Share the results of your score card via PDF or image. You can send out official results to everyone that you played with, or you can brag on social media about your recent win.

    Score Board! When the app is shared via AirPlay or hooked up to a monitor, a score board version of your score card is displayed for everyone to see.

    Game Names! Score cards can be named for categorizing and discovering your past games.

    In addition to the major changes above, the entire app has been rewritten in SwiftUI and now requires iOS 15 or iPadOS 15 at a minimum.

  • Long Lost Advent of Code Visualizations

    As is tradition with Advent of Code, I make visualizations of some of the solutions. Nowadays, I do it in a fancy way using Apple’s APIs to draw, present, and encode the visualization for me.

    Prior to that, I used to solve the problems with Ruby. I would output a series of images and then later encode them using FFmpeg to make my videos. How inefficient!

    It also turns out that Google is yet again closing off a free service it got everyone hooked on. I’ve been reorganizing and re-uploading YouTube videos and found some of the old videos mentioned above.

    Fixing a broken display

    Moving data in a mainframe

    Defragging a disk
  • Advent of Code 2021

    Update: I’ve rewritten the visualization and documented the process.

    Every year, I attempt to complete the Advent of Code. It’s a series of programming challenges that gives me an opportunity to exercise my coding ability in new and unique ways.

    I completed this year in the same manner I’ve used for the past few years: Swift command line applications with some visualizations done through CoreGraphics, AVFoundation, Metal, and SwiftUI. The solutions can be found on my GitHub page.

    My Animator and RenderableWorkView stayed roughly the same from last year, but I did discover a terrible memory leak in the Metal renderer. C calls like CVMetalTextureCacheCreateTextureFromImage write to pointers, and in Swift, Automatic Reference Counting misses that overwrite, causing every single texture generated to leak. For proper accounting, you must assign nil to the texture variable first, to ensure the previous texture gets cleaned up. Leaks like that are hard to find because Advent of Code challenges are sometimes designed to consume tons of memory if you aren’t paying attention.

    Bingo with an octopus
    Flashing dumbo octopuses
    Folding transparent notes
    Finding the least risky path
  • Score Card 2.4 Released

    Score Card Icon

    Score Card 2.4 has been released. This version makes game headers more useful by displaying a player’s full name if there is room. When space is too tight, either the first three letters of their name or just their first initial is shown.

    It also makes text bigger on iPadOS to better utilize the screen.

    Get the latest version on the App Store.

  • Advent of Code 2020

    Update: I’ve rewritten the visualization and documented the process.

    Every year, I attempt to complete the Advent of Code. It’s a series of programming challenges that gives me an opportunity to exercise my coding ability in new and unique ways.

    I was unable to complete this year. Life sometimes just gets in the way. I did most of my work in Swift on the command line. The solutions can be found on my GitHub page. I rewrote my animation code to also exist as a live preview window. This allowed me to watch my solutions in real time, as opposed to waiting for the result to be muxed and written to disk.

    The Animator still relies on CoreGraphics and AVFoundation for rendering, encoding, and muxing. A RenderableWorkView has been added, using SwiftUI and Metal, to allow for the previews.

    Optimal seating
  • Restructure 2.1.0 Released

    Restructure 2.1.0 is a minor feature release. sqliteVersion has been added to inspect the version of SQLite that is being used. JournalMode.off has been removed as it is no longer safe or supported.

    This release also adds Dynamic Member Lookup to the Row class, allowing the use of property-like accessors. For example:

    /// Old Method
    let value: Int = row["someValue"]
    
    /// New Method
    let value: Int = row.someValue
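
    Under the hood, this is the standard @dynamicMemberLookup attribute forwarding to the keyed subscript. A toy model of the pattern (illustrative only, not Restructure’s actual declarations):

    @dynamicMemberLookup
    struct ToyRow {
        let storage: [String: Int]
        
        subscript(key: String) -> Int {
            storage[key] ?? 0
        }
        
        // row.someValue is rewritten by the compiler into row[dynamicMember: "someValue"]
        subscript(dynamicMember member: String) -> Int {
            self[member]
        }
    }
    
    let row = ToyRow(storage: ["someValue": 42])
    let value = row.someValue   // 42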
    
  • Score Card 2.3 Released

    Score Card Icon

    Score Card 2.3 has been released. This version brings trackpad and keyboard support on iPadOS, as well as improvements to Dark Mode.

    It also has an improved credits screen, and a new “What’s New” page, telling you exactly what you are reading here!

  • Advent of Code 2019

    Update: I’ve rewritten the visualization and documented the process.

    Every year, I attempt to complete the Advent of Code. It’s a series of programming challenges that gives me an opportunity to exercise my coding ability in new and unique ways.

    This year, I used Swift Package Manager and command line apps to solve each day. The solutions can be found on my GitHub page.

    As per tradition with Advent of Code, I’ve visualized some of my solutions. I used a combination of CoreGraphics and AVFoundation to render, encode, and mux the results. All of this is contained within my Animator class.

    Painting the hull of a space ship
    The effects of gravity on Jupiter’s moons
    Playing Breakout with a custom IntCode language
    A repair droid exploring
    Traversing a multi-dimensional maze
  • Score Card 2.2 Released

    Score Card Icon

    Score Card 2.2 has been released, bringing support for iOS 13 and Dark Mode. The app has received an overall polish, and the player selection screen has been cleaned up to be more intuitive. Rows can be selected to be cleared or deleted.

  • Formula Control 2.1 Release

    Formula Control Icon

    Formula Control 2.1 has been released, with support for iOS 13 and Dark Mode. Under the hood, it has transitioned to using Swift Package Manager for its third-party dependencies.

  • Restructure 2.0.0 Released

    In preparation for iOS 13, tvOS 13, & macOS 10.15, I’ve released Restructure 2.0.0. The major change for this release is the move to Swift Package Manager for distribution. All of the Apple ecosystem is moving to SPM, and so goes Restructure. No more submodules!

    This release also adds some of the features mentioned in the WWDC 2019 session Optimizing Storage in Your App. Journal and vacuum modes can now be modified to better manage your storage usage. Secure deletion is also configurable.

  • Score Card 2.1 Released

    My Score Card app had been neglected for a while, and since I was already in the mode of updating iOS apps, I gave this app the same courtesy. As with my other projects, I’ve updated the app to Swift 5 and Restructure. Along with that, the app should look better on modern iOS devices.

    As an added bonus, I added a custom keyboard, since the stock keyboards on iOS never presented the exact right combination of keys to make score entry easy. Along with that keyboard, the iPad version gets simple physical keyboard support for quickly editing scores.

    Editing scores on iPhone

    I also moved the app from free to $0.99. This is more of an experiment, since the app already has a good number of users and there is zero advertising done for it.

    Check it out on the App Store.

  • Announcing: Restructure 1.0.0

    In the process of writing Formula Control, I decided it was time to rethink my SQLite wrapper, Structure. I started writing my original library when Swift 1.0 was announced. It was migrated through the big language transitions of Swift and was starting to show its age. The framework was also my first attempt at writing a Swift library and a SQLite wrapper, so I didn’t know what I needed and which features were overkill.

    And so Restructure was born. The new framework simplifies the API I had created before, hiding relationships between statement and database, and removing internal queueing that was never necessary. It adopts many more datatypes, and makes it easier to work with more complex datatypes like arrays and dates.

    Along with the cleanup, Restructure also adopts more modern features of Swift. Statements are also Sequences, so now results can be iterated, mapped, reduced, or anything else a Sequence can do. Statements are Encodable and Rows are Decodable, making transitions between database and data structure seamless.

    Check it out on GitHub. There are examples and unit tests to learn from.