Matt Diephouse

I'm passionate about writing great software. Find me on GitHub and Twitter as mdiep.

All Articles

Type-Driven Development with Swift

23 May 2017

I recently started reading Type-Driven Development with Idris, by Edwin Brady. It’s been enlightening to learn about Idris’s dependent types, and I’ve found the book very approachable; I highly recommend it.

In Type-Driven Development, as presented in the book, you focus on the types of your functions before implementing them. After the types compile, you go back and write the actual code. This gives you a proven foundation for your code. (Proven because types are proofs.)

Idris does this with holes, which stand in for expressions that you haven't written yet.

module Main

main : IO ()
main = putStrLn ?greeting

?greeting is a stand-in for the value that’s going to be printed. Idris will report the type of the hole. In this case, ?greeting is a String.

Swift doesn't have holes (maybe it should??), but I’ve found myself using a very similar technique with fatalError. I can figure out my types and then fill in the implementations. This is particularly useful when I’m unsure how to represent a concept in the type system.

enum Result<T, E: Error> {
    case success(T)
    case failure(E)

    func andThen<U>(_ action: (T) -> Result<U, E>) -> Result<U, E> {
      // I can work out the types before I write this
      fatalError()
    }
}

let x = Result  // Oops! `E` can't be inferred.
    .success(3) // I can figure this out before implementing `andThen`.
    .andThen { value in
        return .success("foo")
    }
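Specifying E explicitly is enough to make everything compile. A quick sketch, using a hypothetical AppError type:

enum AppError: Error {} // a hypothetical error type, just for illustration

let y = Result<Int, AppError>
    .success(3)
    .andThen { value in
        return .success("foo")
    } // `y` is a `Result<String, AppError>`

Running this would trap in the fatalError, but it type-checks, and at this stage that's the point.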

This works at the top level of your program. But it can also work as you implement a given function:

extension Result {
    func andThen<U>(_ action: (T) -> Result<U, E>) -> Result<U, E> {
        switch self {
        case let .success(value):
            return action(value)
        case .failure:
            // I can come back to this after finishing the `.success` case
            fatalError()
        }
    }
}

This has been a very handy tool for me in the past month. Type inference can be hard to predict, and this lets me work through an API without spending the time to implement it.

Logic Programming in Swift

21 Dec 2016

Today I open-sourced Logician, a Swift logic programming library. Since I suspect most people don’t know much about logic programming, I wanted to explain a bit about what it is, what it’s good for, and how it works.

Why Logic Programming?

Most programs tell a computer exactly what to do, step by step:

Set sum to 0; set i to 0; check that i is less than array.count; add array[i] to sum; increment i; repeat.

This is imperative programming.
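In Swift, that recipe looks something like this:

let array = [1, 2, 3, 4]

var sum = 0
var i = 0
while i < array.count {
    sum += array[i]
    i += 1
}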

Declarative programming tells the computer what something is without describing all the individual steps:

sum is the reduction of array that starts with 0 and adds each element to the total.
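In Swift, that description is a single reduce:

let sum = array.reduce(0) { total, element in total + element }
// or, more tersely: array.reduce(0, +)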

Logic programming is a form of declarative programming that describes the solution instead of the calculation. Based on that description, the computer solves the problem. Describing a solution to a problem can be far easier than devising a method to solve it, which is why logic programming can be appealing.

Example Applications

I find most logic programming examples to be uninteresting and uninspiring. Usually they mention artificial intelligence and involve some sort of fact database. Since I can’t imagine ever needing to solve a problem like that, I had never been very interested in logic programming.

But there are other problems that can be solved with logic programming. Here are a few examples that I find more interesting and the constraints that describe the solution:

  • N Queens (Logician implementation)

    1. Each queen is on a different row
    2. Each queen is on a different column
    3. Each queen is on a different forward diagonal
    4. Each queen is on a different backward diagonal
  • Sudoku (Logician implementation)

    1. All numbers are 1 through 9
    2. Each row has exactly one of each number
    3. Each column has exactly one of each number
    4. Each subgrid has exactly one of each number
  • Map Coloring

    1. Each region is one of n colors
    2. Each region is different from all of its neighbors
  • Dependency Resolver

    1. Each project is one of a set number of versions
    2. Each dependency can have one of a set of pre-defined constraints

Most of us don’t solve these sorts of problems in our day-to-day work, but these problems do need to be solved occasionally. And while I can envision solving them with logic programming, solving them imperatively or functionally would be quite a bit of work.

How Logic Programming Works

In logic programming, constraints establish relationships between variables (unknowns) and values (knowns). To solve a problem, you express the constraints and ask the solver for valid solutions.

This is not unlike algebra: x * 2 + y * 2 == 16 can be thought of as a constraint. Solving the problem establishes a value for all variables. In this case, valid solutions include (x = 1, y = 7), (x = 2, y = 6), etc.

Constraints are stored in a data structure that we'll call State. Finding solutions requires two different operations on the state:

  • Reification: Resolving the non-variable value of a variable

    If a == b and b == 7, the reified value of a is 7, even though it might be stored as a == b. If a == b is the only constraint, then a and b have no reified value.

  • Unification: Combining two sets of constraints, erroring if they're inconsistent

    a == 5 and b == 6 unify, but a == 5 and a == 7 are inconsistent—a can't have two different values. The state signals that a constraint has been violated by throwing or returning an error.

There are a number of ways that you can implement unification and reification. One simple way is to use a dictionary that maps variables to other variables or values.

/// An unknown value used in a logic problem.
/// The identity of the object serves as the identity of the variable.
class Variable: Hashable {
    static func == (lhs: Variable, rhs: Variable) -> Bool {
        return lhs === rhs
    }

    func hash(into hasher: inout Hasher) {
        hasher.combine(ObjectIdentifier(self))
    }
}

/// The error thrown when two constraints are inconsistent.
struct Inconsistent: Error {}

struct State {
    /// An assignment of a value/variable to a variable. Since the
    /// value can be either a variable or a value, an enum is used.
    enum Term {
        case variable(Variable)
        case value(Int)
    }

    /// The assignments of variables/values to variables. This is just
    /// one possible approach.
    var substitutions: [Variable: Term] = [:]

    /// Look up the actual value of a variable, following intermediate
    /// variables until the actual value is found.
    func reify(_ variable: Variable) -> Int? {
        switch substitutions[variable] {
        case let .variable(variable)?:
            return reify(variable)
        case let .value(value)?:
            return value
        case .none:
            return nil
        }
    }

    /// Unify two variables, erroring if they have inconsistent values.
    mutating func unify(_ a: Variable, _ b: Variable) throws {
        if let a = reify(a), let b = reify(b), a != b {
            throw Inconsistent()
        }
        substitutions[a] = Term.variable(b)
    }

    /// Unify a variable and a value, erroring if the variable has an
    /// existing, inconsistent value.
    mutating func unify(_ a: Variable, _ b: Int) throws {
        if let a = reify(a), a != b {
            throw Inconsistent()
        }
        substitutions[a] = Term.value(b)
    }

    /// A non-mutating variant of unify, used by the goals below, that
    /// returns the unified state instead of modifying it in place.
    func unifying(_ a: Variable, _ b: Int) throws -> State {
        var state = self
        try state.unify(a, b)
        return state
    }
}
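To see how the two operations fit together, here's a small sketch (runnable in a playground) that unifies a with b and b with 7, and then reifies both:

var state = State()
let a = Variable()
let b = Variable()

try state.unify(a, b) // a == b
try state.unify(b, 7) // b == 7

state.reify(a) // 7
state.reify(b) // 7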

A solver uses unification and reification to find solutions to logic problems. This is a complex subject, with room for lots of optimization. But a simple approach, modeled after μKanren, is to represent constraints with goals—blocks that build an infinite lazy stream.

/// Something that generates values and can be iterated over. (Eager
/// and array-backed here, standing in for a lazy stream.)
class Generator<Value>: IteratorProtocol {
    private var values: [Value]

    init(values: [Value] = []) { self.values = values }

    func next() -> Value? {
        return values.isEmpty ? nil : values.removeFirst()
    }
}

/// A block that takes a state and produces a generator (stream) of
/// possible states.
typealias Goal = (State) -> Generator<State>

If a goal returns an empty stream, it yields no solutions; if it returns a stream with multiple values, there are multiple viable solutions.

Creating a goal is easy.

func goal(_ block: @escaping (State) throws -> (State)) -> Goal {
    return { state in
        do {
            let state = try block(state)
            return Generator(values: [ state ])
        } catch {
            return Generator()
        }
    }
}

func == (variable: Variable, value: Int) -> Goal {
    // A goal that will return either 0 or 1 values.
    return goal { try $0.unifying(variable, value) }
}

Goals handle the unification, but we still need to reify the solutions. To do so, we use a solve function that introduces a new variable, solves for a goal, and then reifies that variable.

extension Generator {
    /// Apply a function to every value created by the generator,
    /// resulting in a new generator of the non-`nil` results.
    func flatMap<New>(_ transform: (Value) -> New?) -> Generator<New> {
        var results: [New] = []
        while let value = next() {
            if let new = transform(value) { results.append(new) }
        }
        return Generator<New>(values: results)
    }
}

func solve(_ block: (Variable) -> Goal) -> Generator<Int> {
    let variable = Variable()
    let goal = block(variable)
    let initial = State()
    return goal(initial).flatMap { return $0.reify(variable) }
}
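Putting the pieces together, solving a toy constraint looks like this:

let solutions = solve { x in x == 5 }
solutions.next() // 5
solutions.next() // nil: no other solutions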

That's all it takes. (But you probably want a few more constraints—at least && and ||.) This is a simplified version of Logician's current solver, but it highlights the core concepts.
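As a sketch of what those combinators could look like on top of the Goal type above (eager versions for brevity; a real solver interleaves the streams lazily):

/// A goal that succeeds when both of its subgoals succeed.
func && (lhs: @escaping Goal, rhs: @escaping Goal) -> Goal {
    return { state in
        var states: [State] = []
        let left = lhs(state)
        while let s = left.next() {
            let right = rhs(s)
            while let t = right.next() { states.append(t) }
        }
        return Generator(values: states)
    }
}

/// A goal that succeeds when either of its subgoals succeeds.
func || (lhs: @escaping Goal, rhs: @escaping Goal) -> Goal {
    return { state in
        var states: [State] = []
        let left = lhs(state)
        while let s = left.next() { states.append(s) }
        let right = rhs(state)
        while let s = right.next() { states.append(s) }
        return Generator(values: states)
    }
}

With these, solve { x in x == 5 || x == 6 } produces a generator of two solutions.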

Logician has many more features, including different value types (not just Ints), inequality constraints, mapping over variables, constraints on properties of a variable, etc. It’s still very naïve, but I hope that it can develop into a sophisticated logic solver.

Read More

If you’d like to learn more about logic programming, the Logician README has a Learn More section with links to some great resources.

Taking on Technical Debt

23 Jun 2014

The Metaphor

In this metaphor, doing things the quick and dirty way sets us up with a technical debt, which is similar to a financial debt. Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to do in future development.
—Martin Fowler

When programmers talk about technical debt, it's overwhelmingly negative. We talk about accruing technical debt and paying down technical debt. This reflects the reality of software development: most debt accrues unintentionally as bugs are fixed and requirements change.

But there's more to it than that. To continue the metaphor, we talk about the interest from technical debt, but not the capital. We focus entirely on the long-term downsides of unintentional technical debt without considering the short-term opportunities that intentional debt can provide.

Why Technical Debt

The metaphor also explains why it may be sensible to do the quick and dirty approach. Just as a business incurs some debt to take advantage of a market opportunity developers may incur technical debt to hit an important deadline.
—Martin Fowler

Martin Fowler suggests that technical debt can be used to hit deadlines, but I think that's an incomplete picture.

Software development is fundamentally a process of change: either from nothing to something, or from one thing to another. Developing iteratively and incrementally helps manage this process. But often the real difficulty is knowing (1) what the end product should be and (2) how to transform what you have into what you want.

Building Knowledge

If you don't know what to implement, you lack knowledge about the system, problem space, or algorithm. Programming is often the most efficient way to learn; trying to solve a problem will introduce you to all its pieces and challenges.

But I often resist building a solution because I don't know how to build a good one. I need to be okay with building something that's flawed. Instead of worrying about the good solution, I need to worry about my understanding of the problem.

The difference between that initial version and the one I'm happy with is technical debt that I've taken on. It has let me learn what I need to build a better solution. This leaves me with some debt to pay down, but iterating tends to be faster than designing everything upfront.

Technical debt lets you build your knowledge to arrive at a good solution.

Decomposing Change

As the size of a change increases, the effort required to complete and test it increases dramatically. Build failures and broken pieces multiply the work required to arrive at a working state.

Breaking down a change into smaller pieces helps you develop efficiently. But even within a well-designed system, changes don't always decompose well. New requirements can span multiple components, or even the whole system, leaving you unable to break down the change into discrete, well-designed pieces.

By focusing on one part of the system, and adding cruft to the other parts where necessary, you can make small, functional changes. Faking or working around an interface insulates the rest of the system, but leaves you with technical debt to clean up.

Technical debt lets you break down work into manageable pieces.

The Power of Capital

Just as a monetary loan can help businesses accelerate their plans, technical debt can speed up the development of new features and software. I believe this is true on both the small and large scale, within a pull request or across a release. It's important to write good code, but don't be afraid to take on technical debt to help you get there.

I've often felt stuck because I didn't know how to write something well. Rather than holding out for great code from the start, I should be happy to start with bad code and make it into great code. Make it work, make it right, and then make it fast.