Learn Haskell in two weeks

Exercise-driven and feedback-rich learning for fast, effective skill acquisition

Mitchell Vitez is on the Engineering Training team at Mercury.

March 5, 2026

Our new engineering hires learn Haskell via a focused two-week format we call “Learn Haskell by Exercises”, or “LHbE” for short. LHbE is entirely exercise-driven and as feedback-rich as we could possibly make it. Our Engineering Training team believes active practice with plentiful feedback is the best way to gain engineering skill.

In the past six months, we’ve run LHbE with more than 50 learners. We run the same program with everyone, from interns to managers to senior engineers. Incoming backend and full-stack engineers are automatically enrolled.

We don’t require new hires to have any Haskell knowledge. In fact, one learner started day one not knowing what a Double was — and still finished LHbE. If a new hire already knows some Haskell, they get an expedited version of the program that fills in any gaps we’ve spotted — we don’t want to waste anyone’s time. We check existing Haskell skill with a placement test that takes under 10 minutes.

We don’t use a book, lectures, or any kind of source material — learning comes from the exercises. Every learner meets one-on-one with a mentor every day. We encourage mentors to avoid answering questions directly in order to help learners think actively. Learners routinely reach the topic of monad transformer stacks within 10 business days.

Here we’ll go over the current LHbE program, including what topics it covers, why we think active practice is great pedagogy, and how we make that practice effective using layered feedback hierarchies. We welcome your questions, thoughts, and other discussion points — email us at [email protected].

Topics covered

Haskell is a language with a lot of surface area, and two weeks is pretty short, so you might be wondering what exactly we manage to cover. Our primary goal is to get new hires skilled enough that they can proficiently complete web development tasks in our backend codebase. This requires substantial Haskell language knowledge, but we’ve found learners can absorb a huge amount of material when given continuous detailed feedback.

We place a lot of our teaching focus on thinking process (especially reasoning with types) and tooling (like Hoogle and GHCi). We want new hires to feel confident approaching the long tail of other Haskell knowledge they’ll eventually need, using tools they’ve gotten familiar with in their first two weeks.

LHbE is organized into 10 sets of exercises, and new hires get 10 business days to complete them. While those numbers provide an easy way for a new hire to assess whether they’re on track, we encourage learners to move as quickly as they want without sacrificing learning quality. For example, one of our interns recently finished LHbE in just eight days.

There’s plenty more content we can fit into the last few days as time allows — LHbE is the start of a learning journey, not the end. Some of our favorite bonus content includes diving deeper into Yesod (our web framework) and Esqueleto (our SQL query EDSL).

Per-set listing

Here is a breakdown of what we cover in each set:

Set 1

  • Hoogle
  • GHCi
  • Viewing the typechecker as a source of useful feedback
  • Type signatures
  • Typed holes
  • map and the basics of higher-order functions
  • Eta reduction
  • Backticks, infix operators, operator sections
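To give a flavor of where Set 1 converges (an illustrative sketch of ours, not an actual LHbE exercise), here is the same function written three ways, moving from an explicit lambda to an eta-reduced form to an operator section:

```haskell
-- Illustrative only: the same function three ways.
doubleAll1 :: [Int] -> [Int]
doubleAll1 xs = map (\x -> x * 2) xs

-- Eta reduction: drop the trailing argument from both sides.
doubleAll2 :: [Int] -> [Int]
doubleAll2 = map (\x -> x * 2)

-- Operator section: (* 2) is "the function that multiplies by two".
doubleAll3 :: [Int] -> [Int]
doubleAll3 = map (* 2)
```

All three are interchangeable, and the typechecker confirms each has the same signature.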

Set 2

  • Writing whole Haskell programs
  • Reasoning with types
  • Inline type annotations
  • Basics of do notation (<- vs. let)
  • Using Maybe to represent failures as values
  • Creating a new route in our backend codebase
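To illustrate the <- vs. let distinction and Maybe-as-failure (a hypothetical example of ours, not one of the actual exercises):

```haskell
import Text.Read (readMaybe)

-- Maybe represents failure as a value: parsing may fail, and the type says so.
parseAge :: String -> Maybe Int
parseAge = readMaybe

describeAge :: String -> Maybe String
describeAge input = do
  age <- parseAge input             -- <- binds the result of a Maybe action;
                                    -- Nothing here short-circuits the whole block
  let label = "age: " ++ show age   -- let names a pure value; it cannot fail
  pure label
```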

Set 3

  • Algebraic datatypes
  • case, pattern matching, guards
  • IO
  • Deciphering pure ()
  • Writing a basic JSON Handler
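A tiny sketch of ours (not an actual exercise) tying several of these together: an algebraic datatype consumed with case and guards, plus pure () as the “do nothing” IO action:

```haskell
data Shape = Circle Double | Rect Double Double

area :: Shape -> Double
area s = case s of
  Circle r -> pi * r * r
  Rect w h
    | w < 0 || h < 0 -> 0        -- guards refine a pattern match
    | otherwise      -> w * h

-- pure () is an action that does nothing and returns the unit value;
-- here it serves as the "else, do nothing" branch.
logIfBig :: Shape -> IO ()
logIfBig s = if area s > 100 then putStrLn "big shape!" else pure ()
```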

Set 4

  • Controlling imports and exports
  • Typeclasses and their operations
  • Redundant constraints
  • Reading class definitions
  • instance and deriving
  • Show, Eq, the Num hierarchy, Ord, etc.
  • Writing unit and integration tests in our backend codebase

Set 5

  • Language extensions
  • Record syntax
  • Record accessor functions
  • RecordWildCards
  • OverloadedRecordDot
  • ($) and (.)
  • How to add a database model in our backend codebase

Set 6

  • Using Either to represent failures with extra error information
  • Ambiguous types
  • Type applications
  • Semigroup and Monoid
  • OverloadedStrings
  • Cardinality, totality, parsing, round tripping
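Where Maybe only says “it failed”, Either carries information about how. A small hypothetical validation sketch (ours, not an actual exercise):

```haskell
data SignupError = EmptyName | TooYoung Int
  deriving (Eq, Show)

validateName :: String -> Either SignupError String
validateName "" = Left EmptyName
validateName n  = Right n

validateAge :: Int -> Either SignupError Int
validateAge a
  | a < 18    = Left (TooYoung a)  -- the error carries the offending value
  | otherwise = Right a
```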

Set 7

  • Foldable
  • Difference between foldl, foldr, foldl', foldr'
  • Non-strict evaluation and thunks
  • Infinite data structures
  • Basic kinds (Constraint and Type)
  • Functor
  • fmap / <$>
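Two of these ideas in miniature (our sketch): non-strict evaluation makes infinite structures cheap to define, and foldl' forces the accumulator at each step where plain foldl would pile up thunks:

```haskell
import Data.List (foldl')

-- An infinite list; laziness means we only pay for the prefix we take.
powersOfTwo :: [Integer]
powersOfTwo = iterate (* 2) 1

firstFew :: [Integer]
firstFew = take 5 powersOfTwo

-- foldl' evaluates the running sum strictly, avoiding thunk buildup
-- on a long list.
total :: Integer
total = foldl' (+) 0 [1 .. 100000]
```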

Set 8

  • Applicative
  • <*>
  • The f <$> a <*> b <*> c pattern
  • Parse, don’t validate
  • Designing types to model the real world
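The f <$> a <*> b pattern in its most common habitat (a hypothetical record of ours): combining independent Maybe values into one structure, failing if any piece is missing:

```haskell
data User = User { userName :: String, userAge :: Int }
  deriving (Eq, Show)

-- If either field is Nothing, the whole construction is Nothing.
mkUser :: Maybe String -> Maybe Int -> Maybe User
mkUser mName mAge = User <$> mName <*> mAge
```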

Set 9

  • Monad
  • >>= and >>
  • Desugaring do notation
  • Functor, Applicative, Monad typeclass hierarchy recap
  • IO, [], Maybe, and Either as monads
  • Reader, Writer, State
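Desugaring in miniature (our sketch, not an actual exercise): the do block and the explicit >>= chain below are the same program, with Maybe’s Monad instance threading the possible failure:

```haskell
halve :: Int -> Maybe Int
halve n
  | even n    = Just (n `div` 2)
  | otherwise = Nothing

halveTwice :: Int -> Maybe Int
halveTwice n = do
  x <- halve n
  y <- halve x
  pure (x + y)

-- The same function after desugaring do notation into >>=.
halveTwice' :: Int -> Maybe Int
halveTwice' n = halve n >>= \x -> halve x >>= \y -> pure (x + y)
```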

Set 10

  • Monad transformers
  • MaybeT, ExceptT, ReaderT, WriterT, StateT, RWST
  • Using MaybeT and ExceptT to collapse nested case expressions
  • Designing programs using monad transformer stacks
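A sketch of the nested-case collapse (hypothetical lookups of ours standing in for database calls; MaybeT comes from the transformers library):

```haskell
import Control.Monad.Trans.Maybe (MaybeT (..))

-- Hypothetical effectful lookups.
fetchUser :: Int -> IO (Maybe String)
fetchUser 1 = pure (Just "ada")
fetchUser _ = pure Nothing

fetchEmail :: String -> IO (Maybe String)
fetchEmail "ada" = pure (Just "ada@example.com")
fetchEmail _     = pure Nothing

-- Without the transformer, every step re-checks for Nothing.
emailForNested :: Int -> IO (Maybe String)
emailForNested uid = do
  mUser <- fetchUser uid
  case mUser of
    Nothing -> pure Nothing
    Just u  -> fetchEmail u

-- MaybeT threads the Nothing check through the whole block.
emailFor :: Int -> IO (Maybe String)
emailFor uid = runMaybeT $ do
  user <- MaybeT (fetchUser uid)
  MaybeT (fetchEmail user)
```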

Thoughts on pedagogy

You wouldn’t learn how to play the piano solely by reading books. You need to actually sit down at the keys, practice, run into trouble spots, make mistakes, and adjust.

But you also wouldn’t learn piano purely by experimentation, pressing keys at random to discover what works. Instead, you’d follow a structured program, find a teacher who can give high-quality feedback, pick level-appropriate pieces to work through, and so on.

Our Engineering Training team is all-in on the power of guided exercises. Educational research consistently finds that actively practicing outperforms passively consuming. This insight shaped LHbE’s structure: lots of hands-on-keyboard exercises, with minimal video instruction, reading, or other passive input.

Against prose

While we’ve always focused on putting exercises first, over time we’ve grown quite skeptical of using extended prose at all.

We used to give learners a book to read along with as they were doing the exercises. There was a failure mode where learners would repeatedly re-read chapters of the book and stay stuck not understanding. Once we encouraged them to make direct progress on the exercises, we got much better results. Eventually, we felt comfortable removing the book entirely, and outcomes improved after doing so.

For similar reasons, we don’t do lectures or provide any other extended passive learning material. Instead, we’ve produced a few 90-second YouTube videos and written short, precisely worded introductions for every exercise module. Only a few of these run over 200 words — at most two minutes of reading.

Instead of prose, we like providing an initial “worked example”, where we’ll do the first exercise in a module for the learner. These demonstrations are often accompanied by notes in code comments. This avoids ineffective “discovery learning” by directly showing the learner what target they’re trying to hit.

After the worked example, the remaining exercises run through various aspects of a single topic. The early exercises are often extremely similar to the worked example. Like levels in a puzzle video game, we then gradually introduce new twists on the main idea. Ideally, a module’s exercises cover the full range of potential learning for a concept, including common sticking points or subtle nuances. Seeing 20 or 30 concrete examples helps learners refine their mental models, and provides a natural learning curve by introducing one tweak at a time.

It’s a popular idea that books are the best way to learn, and many educators spend a lot of time crafting explanations. We think that time is better spent aiming learners’ cognition directly at exercises. Practice over prose.

Useful struggle

Learning Haskell in two weeks is challenging. Some amount of struggle is useful and necessary to learn so much so quickly. However, there’s a lot of room between productively pushing the boundaries and being plainly frustrated.

One form of useful struggle is to notice the problems that arise when doing something the wrong way. We’ve found it’s important to ground these problems in a context as close to genuine real-world problems as possible. When an exercise environment feels fake, it’s easy to make up excuses like “nobody would ever write it this way for real”. Once a learner has seen the real pain that can arise from a problem, e.g. a badly designed type, they’re much more likely to be able to avoid that pain effectively in the future.

On the mentorship side, it can be tempting to help a learner come up with an answer, rather than silently watch them struggle. This steals their learning. That struggle gives learners so much quick exposure to different type errors, problem-solving techniques, etc. that it’d be a real shame if we were to take it away.

However, if a problem is affecting a learner emotionally, that’s a place where a mentor should step in and provide direct guidance. Not everyone comes in with the same level of resilience to challenge, and while we can probably help learners build some of that in a couple weeks, our focus is best placed elsewhere.

Linear progression

Every exercise is ordered. For example, in Set 1 we have modules prefixed with A_, B_, C_, and in each module we have exercises labeled a, b, c, etc. This means learners can do all the exercises in order, without thinking about what to do next — just move to the next one in the list. We could have organized around topic dependencies (like a skill tree in a video game), but so far we’ve found the simplicity of a single path outweighs the benefits we’d get from freedom of choice.

We’ve found that learners who skip exercises or jump around are often engaging in avoidant “safety behavior”. We advise facing hard problems head-on. If something is too difficult, the best thing to do is to break it into subproblems.

We find it’s helpful to make progress visible, especially in a git diff. We’ll always add a markdown checkbox (- [ ]) to tasks, or a typed hole (_) where learners can visibly fill in the blank. A motto for the overall organization of LHbE might be: “one single flow, checked off as you go”.

Mental models

Programmers are always operating with incomplete knowledge. A modern CPU executes billions of instructions per second. If you were handed a log of everything your CPU did in the last second, it would take years to read, let alone understand.

In this pervasive partial-knowledge environment, the quality of your mental models can make a huge difference. We try to help learners produce new mental models for Haskell based on their own prior knowledge, so learners can draw on existing analogies. We expect learners to continue refining these models as they encounter more Haskell code on the job.

It’s important that learners come up with their own analogies and mental models. Handing them an analogy you personally like rarely works well. There are too many ways any given analogy fails to match up against the underlying reality. Instead, give learners many concrete examples, and let them come up with the abstraction themselves.

One-on-one mentorship

Each learner is assigned a mentor to guide them in one-on-one sessions. Mentor behavior can have a huge impact on how much and how efficiently someone learns. It’s hard to be perfect here, so we have our Haskell mentors constantly review each other’s work, and aren’t shy about sharing ways they could improve.

The worst mentor behavior is “stealing learning”. This is incredibly common, even among mentors who think they aren’t doing it. It’s like going to a gym and helping someone else lift their weights, or picking up a friend’s controller in a game and beating a boss for them. They’ll be farther than before by objective metrics, but won’t have any of the learning (or satisfaction) that comes with actually accomplishing a task. We shadow new mentors in every session for their first several learners, and this is always one of the biggest points of feedback.

A mentor also needs to be able to direct conversation without being controlling. When dealing with an exercise module, there are often several different points we want to ensure a learner has seen and thought about deeply. We let learners open with their own questions — sometimes those questions lead right into the major points we want to make sure they’ve understood. However, it’s the mentor’s responsibility to make sure nothing important gets missed.

Below are some questions we encourage mentors to ask while on mentorship calls. Note that none of these are about Haskell, really. They’re all about methods of getting better information. Being on a call is the closest we’ll get to seeing a learner’s inner thought process, so we should take advantage of that to ask about thoughts directly, reserving our coding style comments for PR review.

  • What’s the type of x?
  • How can we prove that?
  • How would we find that out?
  • Where can we look to see that?
  • What examples have we seen of that?
  • How can we become more sure?

It’s important to have strong empathy for learners — they’re doing something really hard. We’re especially careful when training new Haskell mentors, but use the same general approach as when training anyone — abundant feedback to go along with active, hands-on, deliberate practice.

Layered feedback hierarchies

We strongly believe the key to getting learners to learn so much so quickly is to provide an absolutely immense amount of high-quality feedback. Learners are great at making adjustments based on direct feedback, but that feedback needs to be both quick and correct for this to work.

LHbE provides feedback in multiple ways, using a layered approach. There are a few especially important “hierarchies” of feedback involved. The “real-time feedback hierarchy” applies when a learner is actively working through exercises — their goal is to figure out an answer and learn something new as efficiently as possible. The “checking hierarchy” provides multiple layers of feedback around correctness and writing good code. Finally, “metacognitive feedback” applies to the general structure of learning, and is where an experienced Haskell mentor will place most of their focus.

We want our feedback environment to be as richly saturated as possible, and are constantly thinking about how to tighten our learners’ feedback loops.

Real-time feedback hierarchy

Learners get real-time feedback while working on exercises. This hierarchy is organized by time scale — each additional level of the hierarchy is slower, but can provide a broader scope of correct answers than the previous level.

Immediately

We strongly encourage turning on a file-watching typechecker like ghciwatch before writing any code, because this is the quickest, lowest-level feedback loop available. We explicitly tell learners to set up their windows so they see compiler output immediately upon changing a file. If a learner has to alt-tab, their feedback loop is not immediate, and that’s too long.

  • Failure: Forgetting to turn on the typechecker or test runner, and missing out on a valuable feedback stream.
  • Success: Turning on the typechecker as a matter of habit, before doing anything else, and having its output constantly visible.

Less than 15 seconds

The quickest feedback tends to come from type error messages. We encourage the use of Haskell language features like typed holes and type annotations to get information quickly. Especially as learners advance, towards the end of week one and throughout week two, they’ll need to reason clearly about how the types involved fit together.

  • Failure: Only seeing red/green instead of extracting all the available information from an error message. Type error messages can sometimes be confusing, but we often see learners treating them as a one-bit signal that something is wrong, rather than as useful information pinpointing what specifically needs changing.
  • Success: Performing step-by-step, clear-minded reasoning about how the types fit together, and speeding up that mental process over time. The best learners use this in combination with tools like typed holes, to prove to themselves that their quick-and-dirty reasoning holds up.
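For instance (a made-up exercise of ours, not an actual LHbE module), a learner might leave a typed hole and let GHC name the missing piece:

```haskell
-- While writing this, a learner might first leave a typed hole:
--
--   pairWithLength xs = map _ xs
--
-- and GHC would respond with something like:
--
--   Found hole: _ :: String -> (String, Int)
--
-- which says exactly what kind of function still needs writing.
pairWithLength :: [String] -> [(String, Int)]
pairWithLength xs = map (\s -> (s, length s)) xs
```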

Less than one minute

Of course, sometimes learners run into a type they’re not familiar with. The very first thing we teach on day one isn’t any Haskell feature, or even how to run the code. It’s Hoogle.

We’ve repeatedly seen that strong Hooglers on days 2 and 3 tend to outperform on days 9 and 10, when things are really tough. In the long run, an engineer who can Hoogle effectively to find information is far more valuable than one with lots of Haskell knowledge who gets stuck on the unknown, unsure what to try next. Real-world engineering problems will absolutely contain unknown constructs.

The best learners will gain enough comfort with Hoogling type signatures to start piecing together Hoogle queries from multiple partially-known types. Hoogling is a great way to make partial progress on understanding a problem.

  • Failure: “Staring at the code” when encountering something unknown; reaching for slower methods first, like asking an LLM, trawling through Google search results, or reading Stack Overflow.
  • Success: Being quick to Hoogle something unknown, especially when this leads directly to partial progress on a subproblem. Hoogling for types rather than function names.
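A hypothetical example of Hoogling from partially-known types: suppose we have a parser of type String -> Maybe Int and a list of inputs, and want a Maybe [Int] that fails if any single parse fails. Hoogling the shape (a -> Maybe b) -> [a] -> Maybe [b], or its generalization (a -> f b) -> t a -> f (t b), leads straight to traverse:

```haskell
import Text.Read (readMaybe)

-- A known piece: a parser that can fail.
parseId :: String -> Maybe Int
parseId = readMaybe

-- The function Hoogle surfaces for the shape we wanted:
--   traverse :: (a -> f b) -> t a -> f (t b)
parseAll :: [String] -> Maybe [Int]
parseAll = traverse parseId
```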

Less than five minutes

Hoogle can’t answer “why is it…” or “what should I…” questions particularly well. At this point, we encourage learners to ask an LLM or maybe search Google/Stack Overflow. We’re trading speed of result for breadth of query.

It’s important to note that LLMs are a double-edged sword. Both some of our fastest and some of our slowest learners have leaned heavily on LLMs. The fast ones used LLMs to double-check their work to see if they could have done anything better, or to explore related topics and generally dive in a little deeper to understand things more fully. The slow ones used LLMs like a crutch to get to an answer, without getting to understanding. We give these learners feedback to help eliminate that behavior.

  • Failure: Spoiling learning by having an LLM do the work. Lack of automaticity on the basics induces a lot of friction later when moving towards higher skill levels.
  • Success: Using LLMs as another layer of feedback after all tests pass, to check code for style, efficiency, and other concerns that aren’t as easy to test.

Less than 15 minutes

The last step is to message a Haskell mentor. Our mentors can’t promise response times as good as an LLM, but can provide an even wider range of correct answers, since they’re so familiar with the material in LHbE.

  • Failure: Messaging a mentor as a first instinct, rather than after a solid attempt and some information gathering. Mentors are almost always slower to respond than any of the methods above. Alternatively, staying stuck for an hour (or more) instead of getting unstuck with a simple message.
  • Success: Quickly eliminating other possibilities via faster levels of the real-time hierarchy, and reaching out with a well-targeted message that gets you unstuck and back to practicing more. Making progress on other things in the meantime.

Checking hierarchy

There are a few main layers of “check your work” feedback built into our LHbE system. These tools overlap with the ones that provide real-time feedback, but often require a different approach when the goal is to end up with final code that’s as clean and correct as possible.

Typechecker

Haskell is known for its unusually strong type system, so “does it compile?” is an unusually good first approximation for “is it correct?”. At any rate, if the code doesn’t compile, then it certainly is not correct.

Tests

Types can’t catch every issue. We use tests as another source of near-instant correctness feedback that learners can run on their own machines. This usually takes the form of a main function that learners aren’t allowed to change, with an “eval comment” (-- $> main) that ghciwatch will evaluate. A few modules even have type-level tests, where we want to ensure learners get the types correct in a context where multiple options would typecheck but not all of them are valid answers. Occasionally we’ll see code submitted that typechecks but isn’t correct, and running tests catches this objectively, without requiring manual input from a Haskell mentor.
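The shape of such a module looks roughly like this (an illustrative sketch, not an actual LHbE exercise; we’ve named the fixed runner checkGreet here, whereas the real setup uses a protected main):

```haskell
-- Exercise (illustrative): implement greet so the runner below passes.
greet :: String -> String
greet name = "Hello, " ++ name ++ "!"

-- The eval comment below makes ghciwatch re-run the check on every save.
-- $> checkGreet
checkGreet :: IO ()
checkGreet =
  if greet "world" == "Hello, world!"
    then putStrLn "ok"
    else error "greet: wrong output"
```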

LLMs

While they have mixed results providing real-time help, LLMs shine as an easy “anything else?” check after an exercise module has been completed. However, we haven’t yet gotten this to work as an automated step consistently enough to meet our standards, so for now it’s optional and learner-driven. Incorrect LLM code review can be quite frustrating, since a key component of useful feedback is that it’s consistently correct enough to feel trustworthy. (This is another good reason to lean heavily on the typechecker — it’s more consistently correct about Haskell code than any human Haskell mentor could be.)

PR review

Coding style and Haskell programming “taste” issues are especially hard to write tests for, but we can catch them in PR review. This is also a decent place to provide asynchronous answers to questions learners have. Many of our best learners will write down their questions or uncertainties as comments in the code, which gives us room to go into depth (with code examples, etc.) in an asynchronous way. However, once a PR comment chain grows to more than a few messages long, we start to suspect that a higher-bandwidth channel (like a quick Slack huddle) would be a better option to support the learner.

Including code review in our process lets us start guiding learners towards skill in “softer” areas, like naming, code architecture, choosing among multiple valid options, and designing types that represent their corresponding real-world domains as correctly as possible.

While code change suggestions can be useful for pointing out small, easily fixed stylistic issues, it’s important not to fall into the trap of “here’s the right answer” on tougher questions. A virtue we want to encourage throughout the process is thoroughness in checking one’s own work. Writing “how can we check this?” is probably better than “this is wrong”, even when they amount to the same thing. Eventually, we want learners to succeed in an environment without a Haskell mentor readily available.

One-on-one daily calls

Every learner does daily 30-minute calls with a Haskell mentor. Sometimes it’s surprising how much still comes up on a call after code has been through all the previous layers.

On calls, we’ll go through each completed exercise file, with the learner sharing their screen — they’re in charge of the discussion’s direction. We’ll start by bringing up any lingering questions, concerns, or areas of confusion. We also have a few bonus exercises prepared for every file in the repo. Mentors keep things quite Socratic, mostly asking questions rather than answering them. This call is a great chance to push learning a little further and ask learners questions that check in on their metacognition, not just that they’re getting the right answers to exercises.

Metacognitive feedback

There are plenty of higher-level behaviors that can help learners learn more quickly, smoothly, and effectively. We encourage these behaviors by calling out any good instincts learners naturally display, and by encouraging a few specific behaviors explicitly. A learner’s starting point barely matters relative to their ongoing behaviors in determining what level of skill they end up with at the end of two weeks. To improve these, a learner has to think about their own thinking process.

Thoroughness

Good engineers check their work. If nothing else, it’s a sign of respect to the engineers around them that they’re trying to do things right. A learner who’s thoroughly tested their own work before pushing it up for review is likely to be getting faster feedback and learning more quickly — they’re not as reliant on slower systems like PR review, and can take charge of their own learning. As with any other activity, staying organized, completing work ahead of time, and paying attention to detail are helpful qualities.

One thoroughness exercise to run with a learner live on a call is to repeatedly ask “can we make this code any better?”. For the first few iterations, the answer will be “yes”. However, mentors should never bottom out this process themselves. Even if the code is as good as it can possibly be, ask again. Occasionally, learners will spot an improvement that the mentor didn’t see, so it can be an exercise in humility. However, the more important lesson is that we should keep asking ourselves about potential improvements until we have good reasons for the answer to be “no”.

Using types

On the Training Team we tend not to be super opinionated about the minutiae of coding style, but we do answer the question “how should I think about Haskell code?” directly: “by reasoning with types”.

We want learners to be able to determine what the types are for every piece of an expression, using the typechecker and Hoogle as inputs into that system of thinking. Understanding how the pieces of an expression fit together is crucial as a way to break down big problems, but it also theoretically scales up to the size of the entire codebase. main is one giant expression of type IO () — after that, it’s subexpressions all the way down.
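In miniature (our sketch), that kind of reading looks like annotating each subexpression with its type:

```haskell
-- Reading an expression by typing its pieces:
summarize :: [Int] -> String
summarize xs = "total: " ++ show (sum xs)
--   sum xs                     :: Int
--   show (sum xs)              :: String
--   "total: " ++ show (sum xs) :: String

-- And at the very top of a program, main is just one more expression,
-- of type IO ():
--
--   main :: IO ()
--   main = putStrLn (summarize [1, 2, 3])
```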

There is an aspect of formal precision in this, but we also think of types as good inputs to fast-and-loose human-centric reasoning. Types are abstractions, and abstractions let us think clearly about complicated topics. For example, when we talk about the String type we’re talking about an infinite object — the infinite set of all possible values that have type String. We’re no longer dealing in the realm of specific examples, and gain huge amounts of reasoning power by doing so.

Curiosity

When a learner runs into something they don’t know, their mentor shouldn’t gloss over that fact. Instead, mentors should encourage learners to quickly visit Hoogle and get a rough idea of what they’re dealing with. After the first couple days, we start asking this as a question: “where can we find that?” or “how would we figure that out?” Occasionally, learners will invent a mantra like “to the Hoogle!” which is pretty solid evidence that’s what their instincts would lead them to do when programming on their own.

One of the behaviors most indicative of learning success is noting points of confusion inline in the code to be brought up on a call or in PR review. Making your own confusions visible to a mentor who can help you sort them out can be a little humbling, but ultimately is a great sign you’re willing to put ego aside and learn as much as possible. Our top few learners of all time all made plenty of these specific kinds of notes, well beyond the normal range.

Transparent uncertainty

Sometimes, a learner will give a correct answer thanks to a lucky guess. The difference between being randomly correct and correct based on actual evidence is huge, but not highly visible to the outside. There’s a reason why essay questions exist in traditional schooling. Seeing into a learner’s thought process a little bit provides information that isn’t available by merely checking they can say the right answer. Learners can help mentors here by making their confidence levels more transparent. One way to do this is to say something like “I’m not too sure”, “I’m pretty sure”, or “I have no idea” before offering a guess.

Dealing with uncertainty is one of the reasons we prefer to help learners gain Haskell skill, rather than Haskell knowledge. As part of their daily work, they will absolutely run into things they’ve never seen before and have to make sense of them. A foundation built on skill, especially when learning that skill involves directly dealing with the unknown, stands a much better chance of holding up over time.

Agency

The best learners are consistently the ones who assume responsibility for their own learning. Learners comfortable stretching themselves further to make sense of difficult topics, or trying to make extra connections beyond the exercises presented, get the most out of the program.

Learners who approach their first two weeks in a “learn as much as possible” spirit rather than a “just getting by” one have significantly better outcomes. We’re mostly observers of these phenomena, since it’s difficult to make a learner far more agentic in a couple of weeks, but we encourage the relevant behaviors wherever we see them.


In conclusion

Our new hires gain Haskell fluency super quickly. We think that’s mostly due to an unwavering focus on active practice, and giving learners an immense amount of feedback.

Although we may be a bit biased, we’re convinced this is the most effective program for learning Haskell in the world.

