<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://pappasbrent.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://pappasbrent.com/" rel="alternate" type="text/html" /><updated>2026-03-27T19:20:14+00:00</updated><id>https://pappasbrent.com/feed.xml</id><title type="html">Brent Pappas</title><subtitle>The official website of Brent Pappas</subtitle><entry><title type="html">Optimizing my Diet with Code</title><link href="https://pappasbrent.com/blog/2026/03/18/optimizing-my-diet-with-code.html" rel="alternate" type="text/html" title="Optimizing my Diet with Code" /><published>2026-03-18T00:00:00+00:00</published><updated>2026-03-18T00:00:00+00:00</updated><id>https://pappasbrent.com/blog/2026/03/18/optimizing-my-diet-with-code</id><content type="html" xml:base="https://pappasbrent.com/blog/2026/03/18/optimizing-my-diet-with-code.html"><![CDATA[<p>Finally, a quick way to find perfectly nutritious diets.</p>

<h2 id="why-search-for-perfect-diets">Why search for perfect diets?</h2>

<p>A healthy diet is essential for one’s well-being.
With a balanced diet, one thinks more clearly, has more energy, is more resistant to disease, and will likely live longer.
On the other hand, a poor diet can lead to brain-fog, fatigue, and disease.
So to seize the benefits of healthy eating whilst avoiding the drawbacks of malnutrition, it is vital to plan one’s diet carefully<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup>.</p>

<p>Yet planning a diet is difficult.
There are dozens of different vitamins and minerals that humans need, and a “perfect” diet would provide sufficient levels of all of them.
Additionally, there are hundreds of different foods to choose from, with each providing different amounts of these nutrients.
This makes it challenging to find a combination of foods that provides all vitamins and minerals.</p>

<p>And existing diet planning tools are all lacking in some capacity.
<a href="https://www.dishgen.com/mealplan">Many</a> <a href="https://galaxy.ai/ai-diet-plan-generator">use</a> <a href="https://makemealplan.com/">AI</a>, which makes them inherently unreliable.
<a href="https://fitchef.com">Other</a> <a href="https://dietplanner.io">tools</a> <a href="https://mealplan.rex.fit">hide</a> their implementations from users, offering little assurance that they aren’t just using AI as well.
And some tools are <a href="https://www.eatthismuch.com/">paid services</a>, which some people may be unable to afford.</p>

<p>So I wrote a free, open-source, non-AI program to generate “perfect” diets.
The key insight is that we can find a perfect diet by simply trying combinations of foods until we find one that satisfies our daily recommended intake of every vitamin and mineral.
In fact, the only thing stopping people from doing this by hand is the sheer number of combinations one would need to try.
But computers are perfect for this sort of task, being designed to process large amounts of data.
And the technique of solving problems by systematically trying all possible solutions already has a name: backtracking.</p>

<h2 id="finding-perfect-diets-with-backtracking">Finding “perfect” diets with backtracking</h2>

<p><a href="https://en.wikipedia.org/wiki/Backtracking">Backtracking</a> is a well-studied category of algorithms which dates back to the <a href="https://www.google.com/books/edition/Handbook_of_Constraint_Programming/Kjap9ZWcKOoC?hl=en&amp;gbpv=1&amp;pg=PA14&amp;printsec=frontcover">1950s</a>.
A backtracking algorithm is one which incrementally builds its way to a solution that satisfies a given set of constraints, by exploring each “path” to the solution one at a time.
If the algorithm explores a path that would prevent the solution from satisfying the problem’s constraints, then the algorithm abandons that potential solution, “backtracks” to the most recent solution that would still be able to satisfy the constraints, and explores a different path to the solution instead.
To make things more clear, let’s see how we can use backtracking to solve our problem of creating a “perfect” diet, one which provides 100% of one’s daily recommended intake of all vitamins and minerals.</p>

<h2 id="minimal-example">Minimal example</h2>

<p>To keep the example simple, let’s focus on a diet providing 100% of your daily recommended intake of just one vitamin, B12, and one mineral, iron.
And to make this example even simpler, we’ll restrict ourselves to only eating eggs and toast for breakfast<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup>.
The diet should also include no more than 2 servings of eggs or toast, because we’ll assume you prefer to eat a light breakfast.
Backtracking can solve this problem by trying all possible combinations of servings of eggs and toast with these requirements.
This is illustrated in the following diagram:</p>

<p><img src="/assets/img/food-1.svg" alt="All combinations of servings of eggs and toast, with each combination containing at most 2 eggs and 2 toast." /></p>

<p>Notice that we only explore combinations with at most 2 servings of eggs or toast, and so avoid wasting time on combinations that exceed this limit.</p>

<p>Now that we have all combinations of eggs and toast, we need a way to check which combinations actually satisfy 100% of our daily recommended intake of iron and B12.
To do this, we first need to know how much iron and B12 one serving of eggs and toast each provide.
For the sake of example, let’s pretend eggs and toast provide the following nutrients:</p>

<table>
  <thead>
    <tr>
      <th> </th>
      <th>Iron</th>
      <th>B12</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>Eggs</strong></td>
      <td>25%</td>
      <td>50%</td>
    </tr>
    <tr>
      <td><strong>Toast</strong></td>
      <td>50%</td>
      <td>0%</td>
    </tr>
  </tbody>
</table>

<p>Armed with this information, let’s return to the previous diagram, coloring in green those combinations of eggs and toast which achieve 100% of our recommended daily intake of both iron and B12.</p>

<p><img src="/assets/img/food-2.svg" alt="All combinations of servings of eggs and toast. Green combinations satisfy our 100% daily recommended intake of iron and B12." /></p>

<p>From this diagram, we can see that there are two <em>unique</em> combinations of eggs and toast that satisfy our intake requirements for iron and B12: 2 eggs with 1 piece of toast (for 100% iron and B12), and 2 eggs with 2 pieces of toast (for 150% iron and 100% B12).
This illustrates the big idea of the algorithm: just try all possible combinations of foods, and emit any that satisfy at least 100% of our daily recommended intake of all vitamins and minerals.</p>
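<p>Here’s what this brute-force idea looks like in code. The following Python sketch is <em>not</em> the actual implementation from the repository; it hardcodes the made-up nutrient percentages from the table above. But the structure is the same: recursively assign each food a serving count, and keep every complete combination that reaches 100% of each nutrient.</p>

<pre><code class="language-python"># A simplified sketch (not the repository's actual code), using the
# made-up per-serving nutrient percentages from the table above.
FOODS = {"eggs": {"iron": 25, "b12": 50}, "toast": {"iron": 50, "b12": 0}}
NUTRIENTS = ("iron", "b12")
MAX_SERVINGS = 2

def find_diets(remaining, servings, solutions):
    """Try every serving count for each food; keep combinations at 100%+."""
    if not remaining:
        # All foods have been assigned a serving count: check the constraint.
        if all(sum(FOODS[f][n] * c for f, c in servings.items()) >= 100
               for n in NUTRIENTS):
            solutions.append(dict(servings))
        return
    food, rest = remaining[0], remaining[1:]
    for count in range(MAX_SERVINGS + 1):
        servings[food] = count
        find_diets(rest, servings, solutions)
    del servings[food]  # backtrack

solutions = []
find_diets(list(FOODS), {}, solutions)
print(solutions)  # [{'eggs': 2, 'toast': 1}, {'eggs': 2, 'toast': 2}]
</code></pre>

<p>Note that the recursion mirrors the tree diagrams: each level of the recursion picks a serving count for one food, and <code>del servings[food]</code> is the “backtracking” step that undoes a choice before trying the next path.</p>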

<p>But although this algorithm works, in practice it would be horribly slow, because we would want to explore many more combinations of foods with a greater maximum number of servings per food.
So now let’s see how we can speed it up.</p>

<h2 id="optimizations">Optimizations</h2>

<h3 id="stopping-once-we-find-a-solution">Stopping once we find a solution</h3>

<p>One easy way to optimize this algorithm would be to simply stop exploring the search space once we reach our first solution.
For instance, let’s assume we explore each node in the search tree by trying that node’s combination first, then its left subtree’s combinations, and finally its right subtree’s combinations (this would be a <a href="https://en.wikipedia.org/wiki/Tree_traversal#Pre-order,_NLR">pre-order</a> tree traversal, a kind of <a href="https://en.wikipedia.org/wiki/Tree_traversal#Depth-first_search_implementation">depth-first search</a>).
With this approach we would avoid exploring much of the tree.
The following diagram illustrates this idea, with the green node signifying our solution, and gray nodes signifying combinations that we skip checking.</p>

<p><img src="/assets/img/food-3.svg" alt="All combinations of servings of eggs and toast. The green combination satisfies our 100% daily recommended intake of iron and B12. Gray combinations are ones we would skip exploring because we would have already reached a solution before we would explore them." /></p>

<p>We skipped checking 15 of the 19 possible combinations; quite the speedup.</p>
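<p>In code, this optimization amounts to returning as soon as one combination passes the check, instead of collecting them all. Again, this is only a sketch using the example’s made-up nutrient values, not the repository’s actual code:</p>

<pre><code class="language-python"># Sketch only: stop at the first combination that meets every requirement.
FOODS = {"eggs": {"iron": 25, "b12": 50}, "toast": {"iron": 50, "b12": 0}}
NUTRIENTS = ("iron", "b12")
MAX_SERVINGS = 2

def first_diet(remaining, servings):
    """Return the first combination meeting 100% of each nutrient, or None."""
    if not remaining:
        meets_all = all(sum(FOODS[f][n] * c for f, c in servings.items()) >= 100
                        for n in NUTRIENTS)
        return dict(servings) if meets_all else None
    food, rest = remaining[0], remaining[1:]
    for count in range(MAX_SERVINGS + 1):
        servings[food] = count
        found = first_diet(rest, servings)
        if found is not None:
            return found  # a solution exists down this path: stop searching
    del servings[food]  # backtrack
    return None

print(first_diet(list(FOODS), {}))  # {'eggs': 2, 'toast': 1}
</code></pre>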

<h3 id="avoiding-redundant-work-with-memoization">Avoiding redundant work with memoization</h3>

<p>We were able to greatly accelerate our algorithm by stopping once we found a single solution, but eating the same thing every day can become boring.
So is there a way we can optimize our approach to go faster while still finding all (or at least, more than one) solution?
In fact, there is!
If we take a second look at the diagram, we’ll notice that we explore some combinations more than once (in the following diagram these combinations are red).</p>

<p><img src="/assets/img/food-4.svg" alt="All combinations of servings of eggs and toast. Red combinations are ones that appear at least twice." /></p>

<p>To make our algorithm faster, we can keep track of the combinations we’ve already explored, and skip re-evaluating them if we encounter them again.
For the above example, this would result in the following search space, with solutions in green, and skipped combinations in gray.</p>

<p><img src="/assets/img/food-5.svg" alt="All combinations of servings of eggs and toast. The green combination satisfies our 100% daily recommended intake of iron and B12. Gray combinations are ones we would skip exploring because we would have already explored identical combinations before." /></p>

<p>With this optimization, we skip checking 10 of the 19 combinations.
This isn’t quite as fast as returning after the first solution (that would have skipped 15 combinations), but the upside is that we now obtain all solutions.
This technique of remembering prior inputs to avoid redundant computation is called <a href="https://en.wikipedia.org/wiki/Memoization">memoization</a>, and our usage of it makes our algorithm an example of top-down <a href="https://en.wikipedia.org/wiki/Dynamic_programming">dynamic programming</a>.</p>
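<p>Here is a sketch of the memoized search, again using the example’s made-up nutrient values rather than the repository’s actual code. This version grows a combination one serving at a time, which mirrors the tree in the diagrams and is why the same combination can be reached along multiple paths; a set of already-seen combinations lets us skip the repeats.</p>

<pre><code class="language-python">FOODS = {"eggs": {"iron": 25, "b12": 50}, "toast": {"iron": 50, "b12": 0}}
NUTRIENTS = ("iron", "b12")
MAX_SERVINGS = 2

def all_diets():
    """Grow combinations one serving at a time, memoizing ones already seen."""
    seen, solutions = set(), []

    def explore(servings):
        # Memoization: skip this combination if we've explored it before.
        key = tuple(sorted((f, c) for f, c in servings.items() if c))
        if key in seen:
            return
        seen.add(key)
        totals = {n: sum(FOODS[f][n] * c for f, c in servings.items())
                  for n in NUTRIENTS}
        if all(v >= 100 for v in totals.values()):
            solutions.append({f: c for f, c in servings.items() if c})
        for food in FOODS:
            if MAX_SERVINGS > servings.get(food, 0):
                servings[food] = servings.get(food, 0) + 1
                explore(servings)
                servings[food] -= 1  # backtrack

    explore({})
    return solutions

print(all_diets())  # [{'eggs': 2, 'toast': 1}, {'eggs': 2, 'toast': 2}]
</code></pre>

<p>Unlike stopping at the first solution, this version still visits every <em>unique</em> combination, so it finds all solutions while avoiding redundant work.</p>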

<h2 id="try-it">Try it!</h2>

<p>You can check out my implementation of this approach on
<a href="https://github.com/PappasBrent/diet_finder">GitHub</a>.
Here’s some sample output, a diet which satisfies all your daily vitamin and mineral needs!<sup id="fnref:3" role="doc-noteref"><a href="#fn:3" class="footnote" rel="footnote">3</a></sup>:</p>

<pre><code class="language-txt">Diet 1
  Bell pepper             400 grams    80.0 calories
  Carrot                  400 grams    164.0 calories
  Chicken                 200 grams    478.0 calories
  Peanut butter           100 grams    598.0 calories
  Salmon                  200 grams    412.0 calories
  Spinach                 100 grams    23.0 calories
  Tofu                    100 grams    144.0 calories
  Total                   1500 grams    1899.0 calories
</code></pre>

<p>You can even customize the tool to generate diets from different lists of
foods, and change the maximum number of servings and calories that you want
each diet to provide. I encourage you to try it out :)</p>

<h2 id="notes">Notes</h2>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1" role="doc-endnote">
      <p>If you want to learn more about nutrition in a fun and easy way, I recommend checking out the YouTube channel <a href="https://www.youtube.com/@Talon_Fitness">Talon Fitness</a>. My girlfriend recently discovered this channel and shared it with me, and I came up with the idea for this post and algorithm after watching a few of their videos. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:2" role="doc-endnote">
      <p>This technique and its implementation support finding diets that use more foods to satisfy our requirements for more vitamins and minerals; these restrictions are again just for the sake of example. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:3" role="doc-endnote">
      <p>Specifically, this diet provides 100% of your daily recommended intake of vitamin A, D, B1, B3, B6, B12, E, C, B2, B5, Folate, and K; as well as 100% of your daily recommended intake of the minerals Calcium, Magnesium, Potassium, Zinc, Manganese, Iron, Phosphorus, Sodium, Copper, and Selenium. <a href="#fnref:3" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name></name></author><category term="blog" /><summary type="html"><![CDATA[Finally, a quick way to find perfectly nutritious diets.]]></summary></entry><entry><title type="html">How I wasted $400 on a Keyboard</title><link href="https://pappasbrent.com/blog/2025/06/07/how-i-wasted-$400-on-a-keyboard.html" rel="alternate" type="text/html" title="How I wasted $400 on a Keyboard" /><published>2025-06-07T13:00:00+00:00</published><updated>2025-06-07T13:00:00+00:00</updated><id>https://pappasbrent.com/blog/2025/06/07/how-i-wasted-$400-on-a-keyboard</id><content type="html" xml:base="https://pappasbrent.com/blog/2025/06/07/how-i-wasted-$400-on-a-keyboard.html"><![CDATA[<p>If you can already work effectively and comfortably on a normal keyboard, don’t
waste your money on an ergonomic one.</p>

<p>In this post I talk about why I bought the ZSA Voyager, why I didn’t really
like it, and why I ultimately returned to using a traditional QWERTY keyboard.</p>

<h2 id="why-did-i-get-a-new-keyboard">Why did I get a new keyboard?</h2>

<p>I purchased an ergonomic keyboard for three reasons.</p>

<ol>
  <li>
    <p><strong>Enjoyment</strong>: I enjoy typing on my laptop’s keyboard, and thought I would enjoy
typing even more on a higher quality board.</p>
  </li>
  <li>
    <p><strong>Comfort</strong>: I like programming and want to ensure that my hands stay
healthy so that I can continue to write code for the rest of my life.</p>
  </li>
  <li>
    <p><strong>Productivity</strong>: I hoped that an ergonomic keyboard would make me more
productive.</p>
  </li>
</ol>

<p>I thought that an ergonomic keyboard would help me achieve these goals, but was
wrong on all counts:</p>

<ol>
  <li>
    <p><strong>Displeasure</strong>: The learning curve for an ergonomic board is so ridiculously
long that I began to hate typing. I thought that this feeling would go away
if I just practiced more, but after several months of practice it didn’t.</p>
  </li>
  <li>
    <p><strong>Pain</strong>: Despite trying to follow proper typing posture when using my new
board, I ended up hurting myself in various ways.</p>
  </li>
  <li>
    <p><strong>Inefficiency</strong>: My productivity tanked for months because typing was no longer
second nature to me, and I was perpetually distracted from working on other
tasks because I felt a constant need to practice typing.</p>
  </li>
</ol>

<p>I am going to explain all these points in more detail later on in this post,
but first I’ll explain why I specifically chose to purchase the Voyager.</p>

<h2 id="why-did-i-choose-the-voyager">Why did I choose the Voyager?</h2>

<p>I decided to buy the ZSA Voyager, and not one of the other high-end ergonomic
keyboards out there (e.g., the Dygma Defy, the Kinesis Advantage 2, the
Glove80), because it seemed to have the most consistently positive reviews
online, and (this is going to sound immature) honestly looked the coolest to
me. I had seen a bevy of YouTube videos lauding the Voyager for
its ergonomic design, sleek structure, and productivity-boosting customization
options. On Reddit, people rave about the board and all its features. I even
have a friend who owns this board, and he persuaded me to get it instead of the
other options.</p>

<p>I’m now convinced that most people who praise the Voyager either:</p>

<ol>
  <li>
    <p>Actually suffer from the same problems I did, and tell themselves that they
love the Voyager to cope with spending a colossal amount of money on it. I
think this is the more likely option because people are usually reluctant to
admit when they are wrong.</p>
  </li>
  <li>
    <p>Were not skilled typists to begin with, and the Voyager was the first board
they learned to type properly on, and so typing on the board actually feels
natural to them. I find this option unlikely because the Voyager is such an
expensive and niche keyboard that I’d expect someone to be at least somewhat
interested in keyboards and/or typing before buying one.</p>
  </li>
</ol>

<p>To explain why I feel this way, I’m going to share my journey with the board.
Before I do that though, I think I should provide additional context by
describing the board itself.</p>

<h2 id="the-voyager">The Voyager</h2>

<div class="row row-centered">
<img src="/assets/img/voyager-512.jpg" alt="My ZSA Voyager" class="rounded-border" />
</div>

<p>The Voyager, shown above, is a unique keyboard. Here’s a list of its key
features, starting with the ones that I actually liked (or could at least
appreciate), and ending with the ones that I really didn’t care for.</p>

<ul>
  <li>
    <p><strong>Split design</strong>: This enables one to “open up their shoulders” (that’s what
I often see and hear people say anyway) when typing by placing the two halves
of the board directly in front of them, shoulder-width apart.</p>
  </li>
  <li>
    <p><strong>Column-staggered layout</strong>: As opposed to the traditional row-staggered
layout, this layout is supposed to better mimic the shape of one’s hand
(e.g., the middle column of the board is the most aggressively staggered
because the middle finger is the longest finger).</p>
  </li>
  <li>
    <p><strong>Small number of keys</strong>: Each half of the board offers a mere 24 keys, with
2 extra thumb keys each, for a total of 52 keys. The idea here is to
customize your board using “layers” (more on this later) so that you can type
with a small number of keys with minimal finger movement.</p>
  </li>
  <li>
    <p><strong>Hot-swappable switches</strong>: You can easily change out the key switches the
Voyager comes with for different ones. I chose the Kailh Choc Pro Red
switches and didn’t have a problem with them, and I’m not really into
mechanical keyboards all that much, so I didn’t swap my switches.</p>
  </li>
  <li>
    <p><strong>LED back lights</strong>: The entire board is back-lit with LEDs. I personally see
LEDs as a gimmick and don’t care for them, so this meant nothing to me. In
fact, it actually makes the board worse in my opinion because it requires
that the board be wired to power the LEDs, which leads me to my next point…</p>
  </li>
  <li>
    <p><strong>Fully-wired</strong>: The board offers no wireless connection features. To use it,
you need to connect the left half to your computer, and then use another wire
to connect the left half to the right. These wires clutter one’s desk and
make the board more annoying to set up, so I’d much rather ZSA had ditched
the cute LEDs in favor of convenient wireless connection options.</p>
  </li>
</ul>

<p>Now on to my journey learning to use the Voyager, and why I eventually decided
it just wasn’t for me.</p>

<h2 id="the-board-arrives-and-the-struggle-begins">The board arrives, and the struggle begins</h2>

<p>I received and unboxed my Voyager in early January, and could immediately tell
that learning how to use it was going to be difficult. I was prepared for this
though, and began to practice doggedly in order to return to my former speed
(around 90 WPM).</p>

<p>The main reason why it was so hard for me to learn to type on the Voyager is
that I don’t strictly adhere to “proper” technique when touch typing with
QWERTY. For one thing, I usually use my left pointer finger to press the C key
instead of using my left middle finger. For another, I also often press the B
key with my right pointer finger instead of my left (on a traditional
row-staggered keyboard, this is just more comfortable). These quirks made it
especially difficult for me to type on the Voyager, because its split design
forced me to follow textbook touch typing technique.</p>

<p>The problem with learning to type with QWERTY “the right way” is that it makes
it much more painfully obvious just how poor of a layout QWERTY is for typing
(at least for English). Its design is lopsided such that many more words can
be typed using only the left hand than the right hand. The only vowel on the
home row is the letter A, which means that in order to type most words, you
need to move your fingers around the keyboard more to reach more vowels.
Finally, some of the key placements are just strange (e.g., J, one of the least
frequently-used English letters, gets a spot on the home row under the right
pointer finger). These problems aren’t as salient on a non-split keyboard,
since you can adapt your typing to accommodate some of these issues (e.g., you
can make up for the lopsidedness by typing the B key with your right hand like
I do). With a split keyboard, however, you just have to live with them.</p>

<h2 id="i-accidentally-give-myself-rsi-problems">I accidentally give myself RSI problems</h2>

<p>So I started practicing typing each day on my new board for 30-60 minutes, and
soon began suffering from symptoms of repetitive strain injury (RSI). My
wrists, forearms, and elbows all hurt. It took me weeks to realize this, but it
was because of how I was typing on my new board.</p>

<p>First of all, I began to suffer wrist pain that I had never felt before when
typing on my laptop. This is because the Voyager comes with no palm rests, and
unlike my laptop keyboard, isn’t low profile enough to prevent me needing to
reach up with my palms in order to type on it. As I would later learn, reaching
with one’s palms like this while typing is a sure way to injure one’s wrists.
While I tried to avoid reaching like this by typing with my hands hovering
above the keyboard (a technique recommended by ergonomists, and that
programmers such as ThePrimeagen claim to follow), this did not alleviate my
wrist pain at all, and actually led to my next problem.</p>

<p>My elbows started hurting. I think this is because, since I was
hovering my hands above the keyboard to avoid wrist pain, I had to keep my arms
bent at a 90-110 degree angle at all times in order to type. Again, I had seen
ergonomists online (and in a book on RSI that my advisor had lent me) recommend
this. This problem also went away once I stopped using the Voyager.</p>

<p>Finally, my forearms began to hurt. I still don’t know what exactly caused this
pain, but it was worse in my right arm than in my left (I am right-handed), and
ceased once I returned to using my laptop keyboard.</p>

<h2 id="i-give-up-once">I give up once</h2>

<p>By this point I had had the board for two months, and while I could type at
about 90 WPM again (but definitely didn’t feel that fast), my new-found pain
had not gone away. Since injury-prevention was one of the main reasons I got
the Voyager in the first place, I decided to stop using the board so as to not
risk injuring myself further. I had had the board for too long to return it
though, so I instead boxed it up and placed it in my closet.</p>

<p>I still didn’t want to give up learning the board though, and its presence
lingered in my mind.</p>

<h2 id="i-try-again">I try again</h2>

<p>Two weeks passed, and I was still thinking about the Voyager. I couldn’t believe
that the board could have been the cause of my injuries, and instead suspected
that I was to blame. There are plenty of ways that I could have hurt myself
that wouldn’t have been the board’s fault. Maybe I practiced too much and too
hard. Or perhaps my being forced to touch type properly on QWERTY was causing
me discomfort. Or I could have simply had poor posture.</p>

<p>So I decided to give the board another try. I resolved to do three things
differently this time around:</p>

<ol>
  <li>
    <p><strong>I would ditch QWERTY</strong>: QWERTY is actually a rather uncomfortable layout
for typing, mostly because of its uneven and impractical distribution of
commonly-used keys. So I opted to try a new, more efficient layout. After
doing some research on the <a href="https://cyanophage.github.io/">many popular keyboard
layouts</a> out there, I settled on learning
<a href="https://github.com/GalileoBlues/Gallium/">Gallium</a>. I chose to use Gallium
for three reasons:</p>

    <ol>
      <li>Gallium has more commonly-used letters on the home-row than QWERTY does.</li>
      <li>Typing in Gallium involves typing fewer same finger bigrams than QWERTY.
Same finger bigrams (SFBs) are pairs of adjacent letters in words that
must be typed with the same finger in order to follow “proper” technique.
For instance, try to type the word “decade” on QWERTY using proper touch
typing technique. Each time you type the bigram “de”, you need to use
your middle finger to press both keys. SFBs are uncomfortable to type,
and QWERTY has many more of them than more modern layouts like Gallium
do.</li>
      <li>I can easily configure my Voyager to use the Gallium layout. ZSA offers
an online tool called Oryx for customizing one’s keyboard layout, and
this made it easy to configure my board to use Gallium instead of QWERTY.
This is in contrast to other modern layouts like Graphite, which require
changing the shifted versions of some keys, a feature that Oryx does not
support. While I could learn to use software like QMK to do this, I
really prefer using a GUI interface to configure my board instead of
modifying configuration files.</li>
    </ol>
  </li>
  <li>
    <p><strong>I would focus on my posture</strong>: I would try harder to “hover” my hands over
the keyboard, sit with my back straight, and keep my elbows at a 90 degree
angle (or slightly obtuse even) without sticking them out to the side
(something I was doing earlier).</p>
  </li>
  <li>
    <p><strong>I would design my own symbol layer</strong>: Since the Voyager has so few keys
compared to a standard QWERTY keyboard, in order to access extra keys such
as symbols, one needs to create <em>layers</em> for their board. A layer is a
separate keyboard configuration that one can activate by holding down or
pressing another key. For example, on a standard keyboard, holding down the
shift key enters a layer which replaces lowercase letters with their capital
variants and numbers with symbols.</p>

    <p>I was previously using a symbol layer that a friend had designed for me, but
was finding it not quite to my liking, so I decided that I would design my
own. I started by doing some research on the topic of symbol layers, and
stumbled upon <a href="https://getreuer.info/posts/keyboards/symbol-layer/index.html">this
post</a> by
Pascal Getreuer, an applied mathematician at Google who sometimes writes
about keyboards and keyboard layouts. I followed some of the advice in his
post to design a symbol layer that I was happy with. You can check the layer
out for yourself below.</p>

    <div style="padding-top: 60%; position: relative;">
	<iframe src="https://configure.zsa.io/embed/voyager/layouts/6JpPX/latest/1" style="border: 0; height: 100%; left: 0; position: absolute; top: 0; width: 100%"></iframe>
</div>

    <p>Notice how the delimiters (e.g., ‘[’ and ‘]’) and certain special characters
like ‘&gt;’ and ‘-‘ are adjacent to each other. This is to make certain common
bigrams one types when programming, such as “()” and “-&gt;”, easier to type.</p>
  </li>
</ol>

<p>I was optimistic that all these improvements to my typing experience would make
typing on the Voyager more enjoyable and less painful - and to a certain degree
they did!</p>

<h2 id="all-is-well">All is well?</h2>

<p>I spent two months re-learning how to type with Gallium, and getting the hang
of my snazzy new symbols layer. I used <a href="https://www.keybr.com/">keybr</a> to
master each individual letter, <a href="https://monkeytype.com/">monkeytype</a> and
<a href="https://github.com/max-niederman/ttyper">ttyper</a> to practice typing
commonly-used words, <a href="https://www.speedtyper.dev/">speedtyper.dev</a> to practice
writing code, and finally <a href="https://ranelpadon.github.io/ngram-type/">Ngram
Type</a> to practice typing common
fragments of words. By the end of the two months, I could type rather
consistently at 70 WPM, and on monkeytype and ttyper I sometimes reached 90 or
even 100 WPM.</p>

<p>I eventually began re-learning the keyboard shortcuts for my most commonly-used
applications (Brave, Vim, Okular, VS Code), and while that was certainly not
fun, after a week or so the muscle memory began to sink in.</p>

<p>So all seems well, right? Except there are still a few problems.</p>

<ul>
  <li>
    <p><strong>Typing on the Voyager still hurts</strong>: I read a book on RSI prevention,
watched hours of YouTube videos on proper typing technique, and even bought
small palm rests to help keep my wrists and forearms straight while typing on
the Voyager; but all to no avail. Try as I might, my wrists, forearms, and
especially my inner elbows still hurt. I really wanted to make the Voyager my
daily driver, but not if it meant injuring myself.</p>
  </li>
  <li>
    <p><strong>I can’t stand having so few keys</strong>: I hate having only 52 keys at my
disposal because it means that in order to use modifiers like shift, control,
meta, and alt, I need to turn some keys into dual-function keys.
Dual-function keys are keys that perform one action when pressed, but another
action when held for some configured amount of time. Even though I used the
Voyager for months, dual-function keys never really felt natural to me. I’d
much rather have more keys each do one thing, and one thing only, even if it
means I sometimes have to reach for them.</p>

    <p>Exacerbating this issue is the fact that since the Voyager has so few keys,
you will almost certainly need to create layers for less frequently used
characters such as symbols. Each time you create a layer, however, you then
need to add a way to access it. The most direct way to do this is with more
dual-function keys. Since I don’t really like dual-function keys, this made
working with layers rather painful, which is unfortunate because I feel like
layers are one of the Voyager’s main features.</p>
  </li>
  <li>
    <p><strong>I get second thoughts on learning Gallium</strong>: While Gallium is much more
comfortable than QWERTY, all software today is made with QWERTY-focused
keyboard shortcuts, so it’s just much more practical to follow convention and
use QWERTY. A perfect example of this is with Vim: In Vim, the keys hjkl are
used to move around because they are all right next to each other on the home
row of a traditional QWERTY keyboard. With Gallium, this is no longer the
case, and hjkl are instead all over the board. I was able to mitigate this
issue by creating a navigation layer that places the arrow keys in the same
positions as hjkl on a QWERTY keyboard. I used this layer to move around in
Vim more naturally, but honestly this felt like a shim I was using to solve a
problem I had just created for myself.</p>
  </li>
</ul>

<h2 id="i-accept-that-its-just-not-meant-to-be">I accept that it’s just not meant to be</h2>

<p>With all these issues in mind, I decide that the Voyager and I are just
incompatible, and put it back in its box. At least for now. I still have it in
my closet, and maybe one day I’ll give it another go. Perhaps next time will be
different, and I’ll be able to type on the board without pain. If not, I can
always just sell it.</p>

<h2 id="random-notes">Random notes</h2>

<ul>
  <li>
    <p><strong>I didn’t just waste money on the board</strong>: I also decided to purchase the
magnetic mount attachments that ZSA sells so that I could try tenting my
keyboard later. These also ended up being a waste of money, since it turns
out I could just tent my board with magnetic phone stands. I tried using the
mounts a few times anyway to mount my board to camera tripods so that I could
use it while standing, but this felt awful and awkward.</p>
  </li>
  <li>
    <p><strong>Gallium is (probably) still better than QWERTY</strong>: Although I’ve pretty much
stopped using Gallium, I still think it’s a better layout than QWERTY. The
<a href="https://docs.google.com/document/d/1W0jhfqJI2ueJ2FNseR4YAFpNfsUM-_FlREHbpNGmC2o/edit?tab=t.6r1v629nms0d">rolls</a>
are better, commonly-used vowels like a, e, and i are all on the home row,
and there are far fewer same-finger bigrams than in QWERTY. I’ve even
downloaded <a href="https://github.com/jtroo/kanata">Kanata</a>, a keyboard remapper, so
that I can type on Gallium on my laptop. I still practice Gallium every now
and then for fun.</p>
  </li>
  <li>
    <p><strong>I’ve made this mistake before</strong>: This isn’t the first time I’ve wasted
money on a keyboard and wasted time trying to learn a new keyboard layout!
When I was in undergrad I taught myself Colemak, and bought a 100% keyboard
from WASD Keyboards, with Halloween-themed key caps that matched the Colemak
layout. I practiced for months, but Colemak never really “clicked” for me
like QWERTY does, so I stopped using the layout and the keyboard. Two years
ago I tried picking both back up again, and got fairly fast, but after about
a month I still didn’t feel happy with Colemak, and gave away my fancy
candied keyboard.</p>
  </li>
</ul>]]></content><author><name></name></author><category term="blog" /><summary type="html"><![CDATA[If you can already work effectively and comfortably on a normal keyboard, don’t waste your money on an ergonomic one.]]></summary></entry><entry><title type="html">Why do a PhD?</title><link href="https://pappasbrent.com/blog/2025/05/25/why-do-a-phd.html" rel="alternate" type="text/html" title="Why do a PhD?" /><published>2025-05-25T13:00:00+00:00</published><updated>2025-05-25T13:00:00+00:00</updated><id>https://pappasbrent.com/blog/2025/05/25/why-do-a-phd</id><content type="html" xml:base="https://pappasbrent.com/blog/2025/05/25/why-do-a-phd.html"><![CDATA[<p>In this post, I describe the challenges of the Computer Science PhD, and I
explain why I believe they are ultimately worth overcoming.</p>

<p>This post is a follow-up to a talk I gave at Sonoma State University earlier
this month, after my collaborator Dr. Suzanne Rivoire invited me to come and
present a guest lecture on graduate school life to her undergraduate students.
Huge thanks to Dr. Rivoire for inviting me to share my experiences, as well as
to my advisor Dr. Paul Gazzillo for giving me the idea to follow the talk up
with a blog post.</p>

<p>The first half of the talk was about the challenges that the PhD program
presents, and why I believe they are ultimately worth surmounting. In this post
I am only going to be discussing this first half of the talk. In the second
half of the talk I told my story as a CS PhD student at UCF for the past five
years. I plan to publish a blog post about my PhD journey after I graduate, so
check back in a year or two for that if you’re interested.</p>

<h2 id="phd-overview">PhD overview</h2>

<p>The PhD is a great opportunity for one to not only become an expert in a
specific field, but also to improve their communication and problem-solving
skills, and ultimately grow as a person. The PhD program is not without its
challenges though, and I want to talk about these obstacles first before
discussing the reasons why I think the PhD is still worth pursuing.</p>

<h2 id="challenges-of-the-phd-program">Challenges of the PhD program</h2>

<h3 id="sense-of-opportunity-cost">Sense of opportunity cost</h3>

<p>When you begin the PhD, you may feel like you are letting many
opportunities to accelerate your career and make a high income slip by you,
especially if you recently graduated with an undergraduate degree in a STEM
field. At least, this is what happened to me: when I graduated, two of my
friends who had graduated at the same time as me immediately went to work in
the software engineering field; one of them for Amazon, and the other for
Instagram. The friend who was working at Amazon has since left that position,
and is now at Meta working his way to becoming a senior principal engineer, the
highest title one can get in software engineering. I also have another friend
who, despite not having a degree in Computer Science, was dedicated enough to
teach himself C programming over the course of two years, and now makes six
figures as a programmer for a bank.</p>

<p>Meanwhile, I’ve been spending the last five years in graduate school making
nowhere near as much money. This doesn’t bother me though because, while I
wouldn’t mind working as a software engineer and making a ton of money in the
process, what I really want to be is a professor, and the PhD is the best way
for me to achieve that dream. Just be aware though that if you are considering
a PhD because you think it will help you achieve a higher-income job, you will
have to first spend several years making just enough money to live
comfortably.</p>

<h3 id="criticism-can-be-harsh">Criticism can be harsh</h3>

<p>At many points throughout the PhD program, you will be inundated with
criticism. Most of this, at least at first, will come from your advisor, who
will (hopefully) be perpetually challenging your research ideas, and giving you
copious amounts of writing and presentation advice. If you are a proud person,
or just don’t generally like receiving advice that you may not have asked for,
then it can be difficult to accept this advice without feeling a little
insulted by it. Your advisor and colleagues aren’t trying to demean you though,
and if you can set aside your pride and take their recommendations to heart,
then you will vastly improve at conducting research, and at communicating your
results to others.</p>

<h3 id="the-phd-can-be-lonely">The PhD can be lonely</h3>

<p>One of the most exciting parts of the PhD is that it enables you to become the
most knowledgeable person in the world on one ultra-specific topic; however, the
potentially scary part about this is that it means that if you run into
unexpected problems during your research (and you certainly will), there isn’t
really anyone you can turn to for help. While your advisor and lab members will
be there to offer you general guidance and support, there likely won’t be anyone
you can ask for assistance with the particular problems you are facing.
Ultimately, you must discover your own novel solutions.</p>

<p>Besides feeling isolated in your work, you may also feel isolated socially.
This is especially true if you are pursuing the PhD in a foreign country or
state, because your friends and family are less likely to be nearby for you to
spend time with. The solution to this problem is of course to simply make new
friends at your university, but it can be difficult to set aside time to make
new friends during the first year or two of your PhD when the pressure to
publish is often at its peak, especially if you aren’t an outgoing person to
begin with. Thankfully, social isolation was never a problem for me since
throughout graduate school I’ve lived with my friends, my brothers, and my
girlfriend; plus I’m only a few hours’ drive away from other members of my
family. I don’t know how common this situation is though (and I suspect it’s
rather uncommon), and if you are considering moving to a new country or state
for graduate school, plan to make new friends after you arrive.</p>

<h2 id="benefits-of-the-phd">Benefits of the PhD</h2>

<p>Despite all these obstacles, I still contend that the PhD is worth pursuing,
because it gives you the chance to do all the following activities:</p>

<h3 id="explore">Explore</h3>

<p>During the first year of the PhD, you have the chance to explore all the
currently unsolved problems in your chosen area of research, and then, once
you’ve discovered one that speaks to you, plan to spend several years working
to solve it. The fun of exploration does not end there, however, as once you
select your problem, you then explore all prior research related to your chosen
problem, and synthesize that information into a novel solution.</p>

<h3 id="think">Think</h3>

<p>The PhD program promotes deep and critical thinking skills, which are immensely
useful skills to have not just for graduate school, but for life in general.
During the PhD you will invest years of your life trying to solve one
ultra-specific problem. You will encounter all sorts of obstacles; some of which
are to be expected and some of which may be unforeseeable. To overcome these
challenges you will usually need rich technical knowledge of your chosen
research area, which you can only truly obtain by spending hours reading
papers, textbooks, reference manuals, documentation, and more (while AI tools
can help you circumvent this tedious learning process by synthesizing available
research for you, I personally would not recommend using them to automate the
research process very often, because by doing so you are essentially
sacrificing the opportunity to learn something the hard way in favor of
immediate results). Once you perform this process of conducting deep research
on a topic the first time, you will realize that you can apply it to basically
any area of your life to accomplish a variety of goals totally unrelated to
your research (e.g., to become financially savvy, to master a hobby, or to
learn how to fix problems with your car or home without needing to always call
a professional).</p>

<p>In addition to deep thinking, the PhD program also sharpens your critical
thinking skills. When you first begin conducting research, you will be
overwhelmed with papers from a wide range of sources, and you won’t yet have
the ability to discern the high-quality papers from the low-quality ones. As
time goes on however, and you participate in reading groups and mock program
committees which give you the chance to see how more senior researchers judge
the merits of academic papers, you will construct your own criteria for grading
research. You will then start to appreciate papers and presentations that are
simpler to understand for their clarity and concision, and realize that
research that is more difficult to understand is not necessarily more
sophisticated or “better”, but perhaps just poorly-presented. This ability to
distinguish high-quality research from lower-quality work will prevent
you from accidentally getting tricked into believing ideas that may not
actually be all that credible, so that you don’t waste time trying to prove
them, and also don’t end up looking like a fool later for doing so. On the
other hand, being able to recognize when a work has merit but is just
poorly-presented enables you to be more gracious when reviewing others’ work
and when giving criticism, which helps improve the quality of research in the
field overall.</p>

<h3 id="communicate">Communicate</h3>

<p>Communication skills are vital to achieving success in many areas of life, both
professional and personal, and the PhD is no exception. The PhD program
provides you with many opportunities to hone your communication skills in the
following three forms:</p>

<h4 id="in-writing">In writing</h4>

<p>You will become much better at explaining technical information clearly and
concisely, and at persuading others why the work you are doing is meaningful
and useful. This is because in order to graduate from the PhD program, you
will need to write academic papers. It is very difficult to write high-quality
technical papers, especially if you haven’t done so before, because they are
extremely dense: each sentence in a conference paper needs to either motivate
your work, justify your experiment design, explain your methods and figures,
discuss your results, or do some combination of these actions. Your
advisor will be giving you critical feedback and advice on how to improve at
all these skills.</p>

<h4 id="in-conversation">In conversation</h4>

<p>You will improve at explaining technical content to others at various levels of
complexity.</p>

<p>First, you will learn to explain your work in simple terms in order to share
what you are up to with your family and friends, since they likely have no
technical knowledge of your research area.</p>

<p>Next, when speaking with lab members or experts at academic conferences, you
will be able to dive into more detail about your work, and will also need to
speak persuasively about it if you hope to gain collaborators or professional
connections. This can be tricky, since while you can expect this audience to
have more technical knowledge about your area than the average person, you
can’t be sure exactly how much knowledge they have about your particular
subject.</p>

<p>Finally, you can delve into the greatest detail when discussing your research
with your advisor during your weekly status updates. However, even in this
situation it is crucial that you avoid getting caught in a tangle of
technical/implementation details because this eats into your advisor’s precious
time, and reduces the amount of valuable feedback that they can give you.</p>

<h4 id="when-presenting">When presenting</h4>

<p>Lastly, the PhD gives you many opportunities to hone your public speaking and
presentation skills (though unfortunately, in my experience many PhD students
graduate without improving much in these areas).</p>

<p>You will likely need to work as a teaching assistant at one point or another
during the PhD, and by doing so will gain lots of experience explaining
technical content to classrooms full of students while leading labs. You will
quickly overcome any fears of public speaking, and improve at explaining
concepts and answering questions directly and clearly. You can even create a
review form for students to fill out and distribute it to your labs at the end
of the semester to receive feedback on how well you did as a TA and areas in
which you can improve.</p>

<p>Next, you will have the chance to practice public speaking in more high-stakes
environments at academic conferences. If you submit a paper to an academic
conference and it gets accepted, the conference organizers will expect you to
attend the conference to present your paper to other experts in your research
area. Each of these presentations provides a chance to make a good impression on
your research community, and to garner potential collaborators or employers, so
it is crucial that you take them seriously. Be willing to spend weeks preparing
conference presentations, and practice presenting as often as you can,
especially in front of colleagues who can provide meaningful feedback. If you
are planning on giving academic presentations in the future, check out <a href="/blog/2024/06/01/what-makes-a-good-presentation.html">my
post</a> on how to give
a great academic talk.</p>

<p>Finally, collaborators may occasionally invite you to give a talk to their
colleagues or students. Take these opportunities when they present themselves,
as they allow you to network, travel, and practice presenting all at the same
time.</p>

<h3 id="network">Network</h3>

<p>The PhD program not only provides a way for you to become an expert in a
specific field, but also to meet, speak, and work with other experts.
While it’s sometimes easier or tempting to work like a lone wolf and
collaborate with only your advisor, it is in your best interest to socialize
with and collaborate with others, because each person you work with opens more
opportunities for you later down the line. Each person you meet could
potentially lead to new job offers and research ideas. In any case, it’s nice
to have more people to turn to for writing you letters of recommendation for
job and funding applications.</p>

<p>You may have heard the phrase, “It’s not what you know that’s important, but
rather who you know.” I don’t really like this saying, because it assumes that
you don’t need to be very well-informed about your work in order to develop a
strong professional network. Instead, I think what’s important is that you know
your research area well, <em>and</em> can speak clearly and convincingly about it to
others - if you achieve this, then other people will <em>want</em> to get to know you.
Then you have the best of both worlds.</p>

<h3 id="grow">Grow</h3>

<p>All the above skills are not just useful for obtaining the PhD, but help you
achieve greater success, enjoyment, and fulfillment in life more generally.
Cultivating a sense of exploration helps keep you engaged in your work, so that
you don’t burn yourself out too quickly, and are thus able to be more
productive. Deep and critical thinking skills enable you to solve more complex
problems by breaking them down into smaller ones, and help you assess the
validity of all the information available to you when conducting research.
Communication skills are paramount to success in just about all aspects of
one’s life, because if you can speak intelligently about what you do and
persuade others that it is important, more people will want to associate
themselves with you, and either work with you or even for you. Finally, the
ability to network and develop connections with people in, e.g., industry,
academia, and the government unlocks more professional opportunities.</p>

<h2 id="its-free-sort-of">It’s free… sort of</h2>

<p>OK, the PhD isn’t exactly “free”, but more like “complimentary” so long as you
can obtain funding for your research. There are three main ways to do this.</p>

<h3 id="graduate-teaching-assistantship">Graduate teaching assistantship</h3>

<p>The most common way to fund your PhD is to work as a graduate teaching
assistant (GTA) for your university. In this position you typically spend an hour
to eight hours a week grading assignments, holding office hours, and leading
labs (often for courses that your advisor teaches). In return, the school pays
for your tuition and gives a stipend just large enough for you to live
comfortably. If you enjoy teaching, then the GTA position can be an
entertaining way to pay for graduate school; however if you don’t like teaching
then it can be annoying to need to set aside time each week for GTA
responsibilities when you’d rather be doing research (which could help you
graduate faster).</p>

<h3 id="graduate-research-assistantship">Graduate research assistantship</h3>

<p>With a graduate research assistantship (GRA), your advisor uses their own
funding money to directly pay for your tuition and stipend. The upside to this
is that you no longer need to spend time each week on teaching, and may instead
devote more time to research. The downside is that if you enjoy teaching, then
with a GRA position you have fewer opportunities to do so. GRA funding is also
more difficult to obtain than GTA funding for two reasons: first, it requires
that your advisor have funding (which is something you should ask about when
searching for an advisor early on in the PhD), and second, it requires that you
prove to your advisor that you are deserving of it (since they will be paying
for it). These challenges are simple to overcome though so long as you make
sure to choose an advisor who has funding, and demonstrate to them that you
have basic time management and organization skills (seriously, if you just
maintain your own time sheets, research journal, and task checklists without
needing constant reminders, your advisor will probably be glad to give you a
GRA position).</p>

<h3 id="fellowships">Fellowships</h3>

<p>Fellowships are funding sources that organizations like the NSF and companies
like Google offer to graduate students who demonstrate strong research skills
and potential. This is the hardest funding source to obtain since the
fellowship application process is usually highly competitive and often requires
three or more letters of recommendation from your research collaborators (and
early on in your research career you may not even have that many collaborators,
or have done enough work, to obtain strong recommendation letters). However,
fellowships are also perhaps the most desirable source of funding, because they
enable you to pay your own way through graduate school without needing to rely
on your school or advisor to pay for you.</p>

<p>Having a fellowship also makes you a more desirable candidate for potential
advisors, since having a fellowship not only means that they don’t have to pay
for you, but also indicates that you are already capable of doing research
(since strong research skills are required to earn most fellowships). This in
turn provides you with more freedom to work on whatever research problems you
want, instead of needing to work on a problem your advisor has already picked
out for you (which isn’t always a bad thing) in order for them to be willing
to fund you.</p>]]></content><author><name></name></author><category term="blog" /><summary type="html"><![CDATA[In this post, I describe the challenges of the Computer Science PhD, and I believe they are ultimately why they are worth overcoming.]]></summary></entry><entry><title type="html">365 Days of Duolingo</title><link href="https://pappasbrent.com/blog/2024/10/12/365-days-of-duolingo.html" rel="alternate" type="text/html" title="365 Days of Duolingo" /><published>2024-10-12T13:00:00+00:00</published><updated>2024-10-12T13:00:00+00:00</updated><id>https://pappasbrent.com/blog/2024/10/12/365-days-of-duolingo</id><content type="html" xml:base="https://pappasbrent.com/blog/2024/10/12/365-days-of-duolingo.html"><![CDATA[<p>This past week I finished a 365 day-long streak of learning Greek on Duolingo!</p>

<h2 id="why">Why?</h2>

<p>I started this challenge last year shortly after finishing my <a href="/blog/2023/09/30/365-days-of-leetcode.html">LeetCode
challenge</a> for two
reasons. First, since I’m half Greek on my dad’s side, I felt that learning
Greek would help me get more in touch with my heritage. At the very least, it
would make my paternal grandparents happy, since they’re native-born Greeks.
Second, I wanted something else to do each day since I was no longer solving
LeetCode challenges.</p>

<h2 id="how-did-it-go">How did it go?</h2>

<p>I definitely feel like I know more Greek! Though I will admit, I was more
enthusiastic about Duolingo in the first month or two of the challenge than I
was during the rest of it. By then I wasn’t as invested in learning and would
only open the app to do my daily practice. Because of this I never reached the
diamond league, but maybe one day I’ll return and try to make it there. Despite
this, I still feel like I learned enough Greek to go on vacation in, e.g.,
Athens or Sparti.</p>

<p>Also, I have to compliment the Duolingo team: they really know how to market
the app, and the designers and developers must play videogames or study game
design because the app makes learning a language so much fun. From the art
design to the sound effects, and especially the way one’s phone vibrates when
they get a correct answer, the whole experience just feels so <em>juicy</em>.</p>

<h2 id="whats-next">What’s next?</h2>

<p>I’m putting down Duolingo for now, but I may return to it later. For now I want
to enjoy at least a few weeks without working on any year-long challenge.</p>]]></content><author><name></name></author><category term="blog" /><summary type="html"><![CDATA[This past week I finished a 365 day-long streak of learning Greek on Duolingo!]]></summary></entry><entry><title type="html">What makes a good presentation</title><link href="https://pappasbrent.com/blog/2024/06/01/what-makes-a-good-presentation.html" rel="alternate" type="text/html" title="What makes a good presentation" /><published>2024-06-01T13:00:00+00:00</published><updated>2024-06-01T13:00:00+00:00</updated><id>https://pappasbrent.com/blog/2024/06/01/what-makes-a-good-presentation</id><content type="html" xml:base="https://pappasbrent.com/blog/2024/06/01/what-makes-a-good-presentation.html"><![CDATA[<p>My mentors have shown me what makes for a good presentation, and now I can’t
un-see the fact that most presentations are just awful. After reading this post
you won’t be able to un-see it either - and you’ll learn how to stand out from
the crowd by making an excellent presentation instead.</p>

<h2 id="its-easy-to-make-a-bad-presentation-on-accident">It’s easy to make a bad presentation on accident</h2>

<p>The first reason that most presentations are boring is that for most people it’s
easier and more intuitive to make a boring presentation than it is to make an
exciting one. For example, many people think it’s a good idea to end their talk
with a “Questions” slide. This seems natural: many presentations end with a Q&amp;A
session, so it “makes sense” to have a slide indicating that the Q&amp;A session has
begun. In reality though, such “Questions” slides are utterly useless. I explain
why in more detail in the
<a href="#tenets-for-making-excellent-presentations">next section</a>, however the biggest
reason is that “Questions” slides get the longest screen time of any of the
slides of a talk while simultaneously providing no actual content. Instead, a
far better approach is to end one’s talk with a well-organized conclusion slide
that advertises oneself while simultaneously reiterating their presentation’s
key takeaways. This is better because it reminds the audience who the presenter
is and why the listeners should care about what they just said, and gives people
a foothold for asking questions about the work. It’s of course much harder to
make a proper conclusion slide though, so that’s one reason why they are less
commonly seen.</p>

<p>The next (and perhaps more important) reason that people make inferior
presentations is that they were simply never taught how to make superior ones.
Again, most people don’t know how to make good presentations to begin with, and
so most therefore aren’t qualified to give advice on how to compose a fine
presentation. This leads to a positive feedback loop where the blind lead the
blind, and in the end hardly anyone knows how to put together a decent set of
slides!</p>

<p>At this point you’re probably thinking, “OK <em>genius</em>, so if most presentations
suck, how about you stop complaining about them and tell me how to make one that
doesn’t?” The answer is simple: Just watch the <a href="https://youtu.be/sT_-owjKIbA?si=cFhFNPJuER4eY3vw">Simon Peyton Jones
talk</a> on how to give a great
presentation. That’s it, just watch that talk and you’ll be a certified grade-A
presenter!</p>

<p>…Just kidding of course. Don’t get me wrong, that talk is really good, and you
should try to follow most of SPJ’s presentation advice most of the time.
However, his talk is very high-level, and for people who are absolute beginners
at giving talks (and I contend that most people are), there is not enough
concrete actionable advice to follow. Simply put, SPJ leaves out many of the
finer, more granular details on how to forge and deliver a captivating
presentation. So here’s my (opinionated) list of things to do to make a
magnificent presentation that will earn you the recognition you deserve for all
your hard work!</p>

<h2 id="tenets-for-making-excellent-presentations">Tenets for making excellent presentations</h2>

<h3 id="go-light-on-the-text-heavy-on-the-visuals">Go light on the text, heavy on the visuals</h3>

<p>You should strive to have as few words on your slides as possible without
sacrificing clarity because it will make it easier for people to pay attention
to and understand your talk. Any words on your slides will compete with you for
your audience’s attention, so by having fewer words on your slides, your
audience will more easily be able to focus on you and what you are saying. This
also makes it easier for the audience to understand what you are saying,
because their attention won’t be divided between you and your slides.</p>

<p>On the other hand, you <em>should</em> go out of your way to add fun, simple visuals to
your talk to help explain or complement what you are saying. You can leverage
the old adage, “a picture is worth a thousand words” to your advantage by using
images to explain complex concepts clearly and concisely. The correct image can
help the audience comprehend what you are trying to say far faster than text
can, and without nearly as much cognitive overhead on the listener’s end. As a
result, the audience will grasp your point more quickly and easily, and will be
more likely to keep listening to you.</p>

<p>I recommend using <a href="https://www.flaticon.com">flaticon</a> for images.
<a href="https://www.kpmoran.com/">Dr. Kevin Moran</a> (the winner of the 2024 ACM SIGSOFT
Early Career Researcher Award) recently recommended it to me, and I had great
success when I used it to make my
<a href="https://youtu.be/OU7kh0YX-Kk?si=Wlj-e6xaGDlFLhi3">ICSE 2024 talk</a>.</p>

<p>“But Brent,” you may be thinking, “how can I explain my super complex
presentation topic without words?” This brings me to my next point…</p>

<h3 id="treat-your-talk-like-a-sales-pitch">Treat your talk like a sales pitch</h3>

<p>Craft your presentation as if you were making a high-level (i.e., not very
detailed) advertisement to deliver to potential investors of your work. The
point of the talk is <em>not</em> to explain every little detail about your work, but
to <em>excite</em> the audience and encourage them to learn more about it (e.g. by
collaborating with you, reading your paper, buying your product, or asking you
questions). For instance, if you were to give a talk about a new battery you
invented, you would just say “You can save money by buying my batteries, because
they last 40% longer than other batteries and thus don’t need to be replaced as
often.” You would want to avoid talking about the specifics as to <em>how</em> your
batteries manage to last so long, since your audience more than likely would not
care about these details.</p>

<p>Put another way: don’t tell the audience all the cool things about your
idea/product/technique, and expect them to realize on their own why it’s a great
piece of work that they should care about. Instead, just tell the audience why
they should care about your great new idea, and provide a very brief intuition
as to how it works. Furthermore, by omitting such details from your main
presentation, you give the audience very obvious questions to ask that you can
more easily prepare for. Speaking of which…</p>

<h3 id="anticipate-and-prepare-for-questions">Anticipate and prepare for questions</h3>

<p>Predict the sorts of questions your audience will ask ahead of time and prepare
to answer them. This will show the audience that you are knowledgeable about your
presentation subject, and earn you more of their respect. One great way to do
this is to prepare a first draft of your slides with too much detail, and as you
refine your presentation, gradually cut unnecessary details out of the main talk
and move them to an extra section after your talk’s conclusion solely for
answering questions about these details. It may feel like extra work to prepare
slides that may never get used if the audience doesn’t ask the questions you
expect them to ask, but if you’re going to cut the content out of the slides
anyway it doesn’t require much effort to just append them to the end of the
presentation instead. Moreover, with judicious content pruning you can subtly
guide the audience to ask the exact questions you want them to ask by making
seemingly obvious omissions in the main presentation. This must be done
carefully though, otherwise you run the risk of leaving too many apparently
obvious details out of your talk and looking like a fool. Unfortunately I don’t
have any concrete advice on the best way to do this (yet).</p>

<p>While all presenters should prepare for questions, the approach of putting extra
slides for questions at the end of your presentation may not work so well if you
plan to accept questions in the middle of your talk instead of waiting to take
questions at the end. This is because you should…</p>

<h3 id="only-move-forward">Only move forward</h3>

<p>You should never go back to a previous slide while giving your talk because
doing so makes it harder to follow what you are saying. People will understand
your presentation more easily if it flows smoothly from one slide to the next.
Furthermore, if you need to go back to a previous slide to explain something on
a later slide, that suggests that your slides were not prepared in the
appropriate order to begin with. Astute members of the audience <em>will</em> notice
this, suspect that you don’t really know what you are talking about, and choose
to stop listening to you. To prevent this from happening you should prepare and
practice presenting your slides from start to finish without stopping or going
back. The first step to doing this is to…</p>

<h3 id="nail-the-first-impression">Nail the first impression</h3>

<p>If your presentation is an advertisement for your work, then your first slide is
the advertisement for the advertisement. It serves two purposes: to inform and
intrigue. When creating the first slide, be sure to include the obvious details
such as the title of the work, the names of the authors (perhaps accompanied by
images of them) and their institutions, the names of any agencies that funded
the work, and the presentation venue and date. This helps people attending your
talk confirm that they are in the right room, and makes it easier for others to
find your talk online in the future. If the first author is not the one giving
the talk, make that clear on the title slide as well by writing the name of the
author presenting the work in bold and by including images of all the work’s
authors (assuming space allows for this).</p>

<p>When you present your first slide do not say the name of your talk, because the
title of your talk is on the first slide anyway and your audience (presumably)
can read. This advice is even more important to follow if you are presenting at
a conference because the session organizer will probably read the title of your
talk before you even begin presenting as well. Instead of reading the title of
your talk, introduce yourself and give a very brief overview of what you will be
talking about. For example, when I gave my ICSE 2024 talk I did not say, “Hello
everyone, today I’m presenting the work <em>Semantic Analysis of Macro Usage for
Portability</em>”. I began my talk by saying, “Hi everyone, my name is Brent, I am a
PhD student at the University of Central Florida, and today I’m excited to talk
with you all about macros”. You want to garner the audience’s interest early on
and build “attention momentum” (a phrase I just made up and am already thinking
about patenting), so that they’ll be willing to focus on your whole talk without
losing interest. Show the audience how much you care about what you’re about to
talk about, and it can rub off on them. In the words of
<a href="https://www.goodreads.com/quotes/868021-to-be-interesting-be-interested">Dale Carnegie</a>,
the best way to be interesting is to be interested.</p>

<p>If you tend to get nervous when presenting in front of crowds, then one way to
overcome this apprehension is to memorize the first few sentences of your
presentation. That way you’ll crush your first few slides, and feel pretty
confident going into the rest of the talk. When you get to your slides that you
haven’t entirely memorized though, just be careful that you…</p>

<h3 id="do-not-read-off-your-slides">Do not read off your slides</h3>

<p>This will totally obliterate your audience’s interest in what you are saying. If
your slides already say everything you are going to say, then the audience may
no longer feel the need to listen to you since they can just read your slides
instead. Some members of the audience may try to keep listening to you, but they
will likely have a hard time doing so because their attention will be split
between your written words and your spoken words.</p>

<p>Also, reading directly off slides is just a lazy way to present that will almost
certainly annoy your audience. It doesn’t require practice and turns your talk
into a monotonous lecture. This is another reason why your slides should have a
<a href="#go-light-on-the-text-heavy-on-the-visuals">minimal amount of text</a>. People
don’t want to hear you go on and on about something, they want you to…</p>

<h3 id="get-to-the-point">Get to the point</h3>

<p>Try to state the core idea behind your work, and why your audience should care
about it, as quickly and concisely as possible. Your audience almost certainly
<em>does not care</em> about how your new solar technology converts photons into
electricity. They almost certainly <em>will care</em> about how cheap it is, how much
money it will save them on their electric bill, and how soon they can expect to
receive a return on investment once they buy it. If this sounds like advertising
<a href="#treat-your-talk-like-a-sales-pitch">that’s because it is</a>. When you’re trying
to persuade someone to buy your product/read your paper/invest in your startup,
the first thing you need to ask yourself is “What’s in it for them? Why do they
care?” The sooner you answer those questions during your presentation, the
sooner you earn your audience’s attention.</p>

<p>Here’s a trick I use to figure out how to explain a topic quickly without
beating around the bush. First, I write a paragraph explaining the idea in a
fair amount of detail. I don’t focus on concision at all; my goal is just to
explain the topic as well as I can using as much text as I need. Once I’m done,
I go back and review the final sentence in that paragraph. More often than not,
that final sentence is the key idea I’m trying to convey to my reader or
listener. If it is, then I move that last sentence to the very beginning of my
explanation, and adjust the rest of my explanation to accommodate this change in
structure. When I do this I often find that many parts of my original paragraph
were entirely unnecessary to explain my key takeaway. I cut these useless parts
out, and the explanation becomes much simpler and more straightforward than
before.</p>

<p>After you’ve established the main idea behind your work and made it clear why
your audience should care, you can start to explain more of the context around
it (e.g., more background on the problem your idea solves and how it works). A
great way to do this is to…</p>

<h3 id="give-examples">Give examples</h3>

<p>People are hard-wired to recognize patterns and learn best by example, so you
should furnish your talk with concrete examples to help explain how your work
solves a particular problem. Great problem examples not only provide context as
to why your work is important, but can also excite your audience to see how your
work solves the problem. On the other hand, don’t try to explain the insight
behind your idea first and then give an application of its usefulness, because
this can confuse and bore people.</p>

<p>For example, let’s say you are giving a talk on why math is important. What you
<em>would not</em> want to do is spend ten minutes explaining all the rules of
arithmetic, and then only briefly and vaguely mention that math is the cornerstone of
technological progress. What you <em>would</em> want to say is, “You can learn how to
budget more effectively and save money by using math”, or “In the 1960s humans
transcended the limitations of gravity and flew to space by using mathematical
formulas”. Only then, after you’ve earned the audience’s interest with your stellar
examples, should you start explaining how arithmetic works.</p>

<p>You’re probably going to give a talk on a topic much more complex than the
importance of math, though, and won’t be able to use such simple and obvious
examples. That’s not a problem, because examples are also effective for
explaining complex ideas. To explain a complex topic, choose quality
examples over a quantity of them. Start with a simple example that doesn’t
illustrate all the complexities and edge-cases of your work, and then…</p>

<h3 id="introduce-complexity-gradually">Introduce complexity gradually</h3>

<p>To explain a complex idea, start with a simple, limited example of the idea, and
then slowly layer complexity on to it throughout your presentation. For
instance, let’s pretend I’m giving a talk on home winemaking (a recent hobby of
mine):</p>

<ul>
  <li>
    <p>First, here’s the simplest example of how to make wine: Add grapes, water, and
yeast to a clean bucket, wait a few weeks, and bam, you have alcoholic fruit
juice. This works because yeast eat sugar (which grapes are full of) and turn
it into alcohol. This process is called fermentation.</p>
  </li>
  <li>
    <p>Now let’s make a stronger wine with a higher alcohol-to-water ratio (i.e., a
higher ABV). To do this, add extra sugar to the bucket before fermentation
begins. Since the yeast will have more sugar to eat, they will produce more
alcohol. (Notice how I’m building off the prior example, which introduced the
fundamental concept of fermentation). Unfortunately, this creates a new
problem: there is a limit to how much alcohol yeast can live with, so our
yeast may actually die from the alcohol they are producing before our wine
reaches our desired ABV. No more yeast, no more alcohol, no stronger wine. (At
this point I’m introducing a complication to winemaking). To fix this, we can
use a strain of yeast with high alcohol tolerance. Such strains of yeast can
withstand higher levels of alcohol and continue to convert sugar into alcohol.</p>
  </li>
  <li>
    <p>OK, we’ve got some strong wine, but now it tastes horrible. Let’s make it
sweeter. The obvious way to do this would be to simply add sugar to the wine
after it’s done fermenting. However, this won’t work because there will still
be yeast floating around in the fermented wine, and if we add more sugar to
it, the yeast will just turn it into more alcohol. (Notice how I’m building
off the concept of fermentation again. The difference between this example and
the last example though is that in the previous example, we used fermentation
to solve our problem and achieve a higher amount of alcohol, i.e. ABV, but in
this example, fermentation <em>is</em> the problem because we don’t want to increase
the ABV). To solve this problem, we can add chemicals to our wine to stop the
yeast from reproducing and prevent it from turning sugar into alcohol. Give
the chemicals a day to work their magic, and then we can add as much sugar
as we want to our wine without worrying about our yeast turning it into more
alcohol. (Here I’ve solved the problem by adding just a tad more complexity,
with the introduction of chemicals to the winemaking process. Notice that I
did not mention the exact chemicals used. Depending on the talk, those details
may be unnecessary, or something I would like to push the audience to <a href="#anticipate-and-prepare-for-questions">ask
about</a>)</p>
  </li>
</ul>

<p>When preparing slides for your example, I recommend putting the most basic
example on one slide, and then adding “appear” animations to reveal the
complications. Here more than ever, it’s crucial that you use
<a href="#go-light-on-the-text-heavy-on-the-visuals">images</a> to introduce the
complications, and not text. Otherwise your example devolves into bullet points,
which are boring and annoying.</p>

<h3 id="provide-full-context-when-possible">Provide full context when possible</h3>

<p>Explain concepts as if your audience can’t remember anything but your key idea
(assuming you’ve already established it) and what’s visible on the current
slide. When speaking, always spell out acronyms and accompany technical terms
with their definitions. This is just another trick to reduce your audience’s
cognitive load. Save them the trouble of remembering what all your technical
jargon means so that they can focus on comprehending why it’s relevant to your
idea (and why your idea is relevant to them). Tread carefully here: although you
want to provide full context in your <em>talk</em>, you don’t want to do this on your
<em>slides</em>. This is because cramming too much context on your slides results in
walls of text that impose a cognitive burden on your audience and distract them
from what you are saying.</p>

<p>There’s a very fine line between providing full context and providing too much
context. Give too little context, and your audience will be lost.
<a href="#get-to-the-point">Give too much</a>, and your audience will get confused. Either
way, they won’t understand you. To determine the right amount of context
necessary for your talk, you will need to…</p>

<h3 id="practice-practice-practice">Practice, practice, practice</h3>

<p>Practice presenting often and with a variety of people. Practice presenting both
to people who do and do not have any background knowledge on what you are
talking about. Feedback from the uninformed audience can help you realize when
certain ideas which seem “obvious” to you are not actually that obvious to your
audience (and thus require that you <a href="#provide-full-context-when-possible">provide more
context</a> when mentioning them). The
informed audience can give you a way to practice answering technical questions
about the work. If you’re lucky enough to have a mentor with good presentation skills,
then ask them how to make the presentation more concise and engaging.</p>

<p>When I gave my ICSE 2024 practice talk to my mentors at the University of
Central Florida (UCF), I learned more about how to put a fun spin on my ideas
and how to better sell myself during my presentation. For example, since my
advisor Dr. Paul Gazzillo had a lot of background knowledge on the work, he was
able to recommend a few analogies I could use to better explain my research. On
the other hand, the other UCF faculty who only had general computer science
knowledge but little knowledge about my work were able to give me more general
presentation tips.</p>

<p>Meanwhile, when I practiced presenting my talk to my (very patient) girlfriend,
I learned how to convey complex ideas more concisely. Since my girlfriend has
very little computer science knowledge, she was willing to ask more “obvious”
questions about concepts that I implicitly assumed the audience would
understand, but apparently did not explain clearly enough. By presenting to a
non-technical audience, I improved at expressing my thoughts with minimal
technical jargon. I also had to learn how to not get mired down in the details
of my work, since that would also confuse her. In short, I had to learn how to
present using the right amount of context. This often required me to…</p>

<h3 id="highlight-key-points-and-cut-out-the-rest">Highlight key points, and cut out the rest</h3>

<p>Emphasize the important, exciting, and surprising parts of your work, and omit
everything else from your main talk that doesn’t serve this purpose. When you
present a figure with data (be it a table, chart, graph, code snippet,
whatever), ask yourself “What do I want the audience to glean from this?” If
there’s something in the chart that you can highlight to make this point
clearer, then do it! Do not try to make your audience read your table and
figure out on their own why the data explains how much better your work is than
prior work. Most of them won’t even try to. Just tell them instead.</p>

<p>Once you determine what the key part of your figure is and have highlighted it,
then consider removing the rest of the figure entirely and just leaving the
highlighted information on the slide. This will help you <a href="#get-to-the-point">get to the
point</a> when explaining the importance of your data. If you
have highlighted multiple parts of the same figure, see if you can use some sort
of average (e.g., mean, median, or mode) to summarize this information, and
replace the figure with this average. Make sure to leave the full figure in
after the end of your presentation though, since this can help you <a href="#anticipate-and-prepare-for-questions">answer
questions</a>.</p>

<p>Sometimes it can be useful to present a large, complex figure to convey just
how complicated and difficult the problem you are solving is. If you do this, my
only advice is to do so quickly. You don’t want to risk your audience actually
trying to read/understand the figure and getting distracted or confused. You
just want to shock them with it, and then take it away before they can think too
hard about it. If they really want to know more, they’ll ask you about it.</p>

<h3 id="stick-the-landing">Stick the landing</h3>

<p>End your talk with a slide that reiterates your main points, presents your photo
along with your contact info, and displays QR codes or short links to pages
where the audience can learn more about your work. When you reach your
conclusion slide don’t read anything on it; simply tell the audience that you
have finished your talk and are ready to accept questions. Don’t proceed to a
“Thank you” or “Questions” slide, because they provide no meaningful content to
your presentation. The last slide of your talk is likely to get more screen-time
than any other and is your last chance to sell yourself to your audience, so
don’t squander it!</p>

<p>Leave your conclusion slide up on the screen while you await questions so that
the audience can record your contact info and read and re-read your key points.
The audience may use these key points to form questions, so <a href="#anticipate-and-prepare-for-questions">prepare
accordingly</a>. If you need to go to an
extra slide to answer a question that’s fine; just try to jump back to your
conclusion slide afterward.</p>

<p>Finally, make sure to finish your talk ON TIME. If you run out of time in the
middle of your presentation, simply stop and say “I had more slides prepared,
but I am out of time and so will end now.” Don’t ask your audience for
permission to continue - they will likely feel bad and say sure, but believe me
they won’t appreciate you for taking more of their time. This is especially true
if your talk is the last one before a coffee or lunch break.</p>

<h3 id="be-humble">Be humble</h3>

<p>Perhaps the most important advice I can give on how to improve at presenting is
to remain humble and accept the fact that the first few presentations you make
(even when armed with this advice) will likely suck. It can take weeks to
prepare an excellent talk, and you may need to throw out your first, second, and
third drafts before you arrive at something half-decent. That’s OK though - most
talks suck, so a half-decent talk is often all you need to stand out. Keep these
rules in mind when you attend your next presentation and you’ll see what I mean.
The good news is that as long as you pay attention to what you’re doing wrong,
and remain receptive to feedback, you will only continue to improve. It is
difficult, but not impossible, to make a great presentation that doesn’t suck.</p>]]></content><author><name></name></author><category term="blog" /><summary type="html"><![CDATA[My mentors have shown me what makes for a good presentation, and now I can’t un-see the fact that most presentations are just awful. After reading this post you won’t be able to un-see it either - and you’ll learn how to stand out from the crowd by making an excellent presentation instead.]]></summary></entry><entry><title type="html">My 2024 Vacation to Portugal</title><link href="https://pappasbrent.com/blog/2024/05/12/my-2024-vacation-to-portugal.html" rel="alternate" type="text/html" title="My 2024 Vacation to Portugal" /><published>2024-05-12T13:00:00+00:00</published><updated>2024-05-12T13:00:00+00:00</updated><id>https://pappasbrent.com/blog/2024/05/12/my-2024-vacation-to-portugal</id><content type="html" xml:base="https://pappasbrent.com/blog/2024/05/12/my-2024-vacation-to-portugal.html"><![CDATA[<p>After attending ICSE a few weeks ago, my girlfriend and I had a blast exploring
in and around Lisbon, Portugal. Here are all the places we visited, as well as my
brief thoughts on them.</p>

<h2 id="museums">Museums</h2>

<ul>
  <li>
    <p>3D Fun Art Museum Lisboa: This museum had a few funny illusions that made for
some great photos (which are a little too silly for me to want to share haha).</p>
  </li>
  <li>
    <p>Calouste Gulbenkian Museum: Tucked away in a library, this museum was replete
with artifacts from cultures all across the globe! We started exploring this
museum in the afternoon, but spent so long admiring everything that the museum
staff had to kick us out before we could finish walking through the last
section on European history.</p>
  </li>
  <li>
    <p>Mosteiro dos Jerónimos: Large and honestly somewhat imposing, this monastery
is one of the most conspicuous historical sites in Lisbon. It’s kept in great
condition though, partly because the city only lets people in to explore the
site in groups of about 30 people at a time. This also made it easier for my
girlfriend and me to get some great pictures, such as this one of the monastery
courtyard:</p>

    <div class="row row-centered">
<img src="/assets/img/mosterio-dos-jeronimos.jpg" alt="Courtyard of the Mosteiro Jerónimos" class="rounded-border" style="width: 312px;" />
</div>
  </li>
  <li>
    <p>Museu da Marioneta: This was actually the first museum we visited, and it was
very funny and quirky. At the end we got to watch the Portuguese stop motion
classic, <a href="https://youtu.be/m4Fciq8LPz0?si=BOT5RHJkdAnmoTkI"><em>A Suspeita</em></a>, and
it was great.</p>
  </li>
  <li>
    <p>The Museum of Art, Architecture, and Technology (MAAT): This museum presented
a look into the industrial history of Lisbon, and into global efforts towards
a more sustainable future.</p>
  </li>
  <li>
    <p>The Museum of the Orient: This was our favorite museum. There were just so
many artifacts (snuff bottles, paintings, dresses, suits of armor, and more!)
that we didn’t have time to appreciate it all before the museum closed and the
staff told us to leave. Oh well, on the bright side we got in for free because
we apparently came on International Day.</p>
  </li>
  <li>
    <p>The National Tile Museum: So many pretty tiles.</p>
  </li>
</ul>

<h2 id="historical-sites-and-landmarks">Historical sites and landmarks</h2>

<ul>
  <li>
    <p>Aqueduto das Águas Livres: Honestly the aqueduct is not that pretty to look
at, but its massive size is impressive, and its location above the highway
makes it a great spot for taking photos.</p>
  </li>
  <li>
    <p>Arco da Rua Augusta: Crowds of people were flocking to this iconic site when
we stopped by it, and there was even a guy with a bubble wand blowing a ton of
bubbles!</p>

    <div class="row row-centered">
<img src="/assets/img/arco-da-rua-augusta.jpg" alt="Me in front of the Arco da Rua Augusta" class="rounded-border" style="width: 256px;" />
</div>
  </li>
  <li>
    <p>Belém Tower: We didn’t go inside but the outside was cool. My girlfriend
painted a picture of the tower one day while I was at ICSE.</p>
  </li>
  <li>
    <p>Centro Cultural de Belém: This was the ICSE 2024 conference venue.</p>
  </li>
  <li>
    <p>National Palace of Pena: Pena Palace was probably the most beautiful
historical site we saw, and definitely the most colorful. This is because it
was built much more recently than the other historical sites (in the 1800s),
and also appears to be better maintained. To get to the palace, we
decided to forgo taking the bus and hiked up the mountain it is perched on
ourselves. This took about an hour, but the climb was worth it in the end.
Afterward we had a nice time exploring the palace gardens (although the
greenhouses were in disrepair), and finally took a semi-hidden side-trail to
climb back down the mountain. The whole experience felt somewhat magical.</p>

    <div class="row row-centered">
<img src="/assets/img/pena-palace.jpg" alt="Pena Palace" class="rounded-border" style="width: 256px;" />
</div>
  </li>
  <li>
    <p>Padrão dos Descobrimentos: An interesting monument to the discovery of America
and the New World. The cool thing about this monument is that one side of it
faces the East and the other faces the West, so if you go in the morning or
the evening, you may want to visit it again at the other time of day so that
you can see the sun shine on the other side.</p>
  </li>
  <li>
    <p>Palace Fronteira: We only explored the outside, and while it was somewhat
overgrown, the nautically-themed tiles were striking, and the maze-like garden
was good for a stroll.</p>
  </li>
  <li>
    <p>São Jorge Castle: This castle is almost worth visiting for the view alone. It
sits atop a high hill in Lisbon and thus offers a great view of the city. The
castle itself is honestly rather rugged, and the stairs are steep, slippery,
and narrow, but I think that gives you a better idea of what it must have been
like for the soldiers who resided here in the past. There’s also a museum next
to the castle that has some neat artifacts that are worth checking out.</p>
  </li>
  <li>
    <p>Sé de Lisboa: This immense church has many gorgeous sites, but a few things
that stick out to me are the pope’s dressing room (so much of it is gilded
that it almost looks like the room is made of gold), a large nativity scene
diorama, and the church’s huge pipe organ.</p>
  </li>
</ul>

<h2 id="nature-sites">Nature sites</h2>

<ul>
  <li>
    <p>Cabo da Roca: Absolutely breathtaking. We traveled to the westernmost point of
continental Europe after visiting Pena Palace, and the view totally blew me
away. The ocean seemed to stretch on endlessly toward the horizon, and the
waves undulated gently under the sun like a sleepy serpent. This was
definitely my favorite part of the trip.</p>

    <div class="row row-centered">
<img src="/assets/img/cabo-da-roca.jpg" alt="Ocean view at Cabo da Roca" class="rounded-border" style="width: 312px;" />
</div>
  </li>
  <li>
    <p>Parque da Pedra: This park was fun to explore. On the side of the park close
to the aqueduct, we saw some cute wooden sculptures!</p>

    <div class="row row-centered">
<img src="/assets/img/parque-da-pedra-wood-sculptures.jpg" alt="Wooden sculptures at Parque da Pedra" class="rounded-border" style="width: 256px;" />
</div>
  </li>
  <li>
    <p>Tropical Botanical Garden of Lisbon: The front of the park is nice but
unfortunately the rest of it was kind of run down and gross.</p>
  </li>
</ul>

<h2 id="restaurants-and-dining">Restaurants and dining</h2>

<ul>
  <li>
    <p>Encanto: This Michelin star vegan restaurant was fantastic. It was our first
time eating at a Michelin restaurant so I didn’t know what to expect. The
dinner turned out to be a nine-course meal, with many of the dishes being
one-bite wonders. I enjoyed every second of it. All the dishes were works of
art, and my girlfriend and I tried the non-alcoholic and alcoholic
drink-pairings, respectively. At the very end, our waiter even gave us a
wax-sealed letter listing the evening’s menu to help us remember our
experience.</p>
  </li>
  <li>
<p>Legumi Sushi Vegan: This place was a true hidden gem. We dined here one night
early on in our trip, and tried the small sushi boat. It was delicious, and
the staff (who were very friendly!) even gave us a free dessert! On our last
day in Lisbon we returned here, ordered the large sushi boat, and managed to
finish it!</p>
  </li>
  <li>
    <p>Time Out Market: Part fish market, part farmer’s market, and part food court,
Time Out Market was quite a fun place to explore for an hour or two. We didn’t
even get a chance to explore upstairs, which from below appeared to be a plant
nursery.</p>
  </li>
  <li>
    <p>LX Factory: ICSE’s conference banquet was hosted at an empty warehouse in this
art district, and while it looked really cool, the banquet was uncomfortably
packed and the restaurant my girlfriend and I tried to escape to was
unfortunately overpriced. If we go back to Lisbon though I would like to check
this place out again; it looked really cool and I bet some of its other
restaurants are better.</p>
  </li>
</ul>

<h2 id="shopping">Shopping</h2>

<ul>
  <li>
    <p>Campo Pequeno: This mall was small and the upper floor was basically closed.
My girlfriend liked the exterior look of the place but honestly I was not that
impressed.</p>
  </li>
  <li>
    <p>Colombo Shopping Centre: This three-story mall was a lot of fun to walk
around. We purchased some nice apparel for cheaper than we would have in the
United States, and explored a cool home decorating store named <em>Area
Infinity</em>.</p>
  </li>
  <li>
    <p>Pingo Doce: I would be remiss not to mention one of Portugal’s biggest
supermarkets! We stopped in a few of these for snacks.</p>
  </li>
</ul>]]></content><author><name></name></author><category term="blog" /><summary type="html"><![CDATA[After attending ICSE a few weeks ago, my girlfriend and I had a blast exploring in and around Lisbon, Portugal. Here are all the places we visited, as well as my brief thoughts on them.]]></summary></entry><entry><title type="html">ICSE 2024</title><link href="https://pappasbrent.com/blog/2024/04/28/icse-2024.html" rel="alternate" type="text/html" title="ICSE 2024" /><published>2024-04-28T13:00:00+00:00</published><updated>2024-04-28T13:00:00+00:00</updated><id>https://pappasbrent.com/blog/2024/04/28/icse-2024</id><content type="html" xml:base="https://pappasbrent.com/blog/2024/04/28/icse-2024.html"><![CDATA[<p>Last week, I visited the beautiful city of Lisbon, Portugal to present my first
lead-author paper, <em>Semantic Analysis of Macro Usage for Portability</em>, at
ICSE 2024.</p>

<p>I attended both the main conference and the CHASE workshop, and while most of
the speakers and their presentations did not impress me, there were a number of
exceptional presentations. Note that I wasn’t able to attend all the talks (that
would be impossible since multiple talks were being given simultaneously
throughout the event), but out of the ones I attended, these were by far the
best:</p>

<ul>
  <li>
    <p><em>A Journey Into the Emotions of Software Developers</em> by Nicole Novielli: Dr.
Novielli gave the opening keynote for CHASE, and it did not disappoint. Her
insights into how developers’ emotions affect their productivity were both
interesting and intuitive. For example, in her research she has found that
developers tend to perform better when they’re happy and worse when they’re
sad. I am currently helping organize an HCI study on the biometrics of
developers while they debug code in group settings, and wanted to ask Dr.
Novielli for some advice, but unfortunately did not get the chance.</p>
  </li>
  <li>
    <p><em>The Surprising Implications of Realism for Human Factors Research</em> by Paul
Ralph: It was surprising that one of the CHASE keynotes was a talk about
philosophy! Dr. Ralph unfortunately arrived late, and his talk was initially
hampered by technical difficulties, but despite these issues he still managed
to give a captivating presentation. I honestly felt like I was listening to a
Jordan Peterson talk at certain points, and I am now definitely more curious
to learn more about Critical Realism and other schools of philosophy.</p>
  </li>
  <li>
    <p><em>Why People Contribute Software Documentation</em> by Deeksha M. Arya: This was
the first non-keynote talk at CHASE that really impressed me, and it ended up
being one of the best talks I saw at all of ICSE. The presentation was
carefully crafted with fun yet simple animations, and Deeksha had clearly
practiced giving her presentation beforehand. I had the chance to talk with
Deeksha after the talk and connected with her on LinkedIn; she was very nice!</p>
  </li>
  <li>
    <p><em>Code Impact Beyond Disciplinary Boundaries: Constructing A Multidisciplinary
Dependency Graph and Analyzing Cross-Boundary Impact</em> by Gengyi Sun: This talk
stood out to me because Gengyi managed to be funny while also explaining her
work very well.</p>
  </li>
  <li>
    <p><em>The Devil Is in the Command Line: Associating the Compiler Flags With the
Binary and Build Metadata</em> by Gunnar Kudrjavets: I’ll be honest, I’m not sure
if I liked this talk or not. On one hand, Gunnar’s slides consisted almost
entirely of text, which I normally don’t like because it divides my attention
between listening to the speaker and reading the text on the slides. On the
other hand, Gunnar is a clear communicator, and the text on his slides was
short, simple, and obviously relevant to what he was saying. I
didn’t really understand the point of this talk (can’t one just export compiler
flags to a <code class="language-plaintext highlighter-rouge">compile_commands.json</code> file by using CMake or by intercepting a
build system with <code class="language-plaintext highlighter-rouge">bear</code> or <code class="language-plaintext highlighter-rouge">scan-build</code>?), but I am curious to attend more of
Gunnar’s presentations.</p>
  </li>
  <li>
    <p><em>Classifying Source Code: How Far Can Compressor-based Classifiers Go?</em> by
Zhou Yang: Zhou was a great and humorous presenter.</p>
  </li>
  <li>
    <p><em>An Ensemble Method for Bug Triaging using Large Language Models</em> by Atish
Kumar Dipongkor: Full disclosure, Atish and I are friends and work together in
the same lab at UCF. Personal bias aside, Atish is still an amazing presenter.
He is passionate about his work and can convey complex topics clearly and
concisely.</p>
  </li>
  <li>
    <p><em>Using an LLM to Help With Code Understanding</em> by Daye Nam: Daye’s
presentation was excellent. Like Deeksha’s, it was masterfully crafted, and she
presented her work very well.</p>
  </li>
  <li>
    <p><em>Predicting open source contributor turnover from value-related discussions:
An analysis of GitHub issues</em> by Jack Jamieson: I really liked Jack’s
presentation because it got straight to the point and was readily
comprehensible.</p>
  </li>
  <li>
    <p><a href="https://youtu.be/OU7kh0YX-Kk">My own talk!</a></p>
  </li>
</ul>

<p>I would recommend keeping an eye on all these presenters, and if you get the
chance to attend their future talks, go for it! I know I will :)</p>

<p>Finally, I realize that in this post I haven’t really described what I consider
to be a good presentation. That’s because I plan on dedicating a future post to
that topic. I plan to have that post out by the end of May, and when I do I’ll
link it here. Basically, all the talks I’ve listed in this post follow most, if
not all, of the tenets I’m going to outline in that future post.</p>]]></content><author><name></name></author><category term="blog" /><summary type="html"><![CDATA[Last week, I visited the beautiful city of Lisbon, Portugal to present my first lead-author paper, Semantic Analysis of Macro Usage for Portability, at ICSE 2024.]]></summary></entry><entry><title type="html">Solving the Grecian Computer</title><link href="https://pappasbrent.com/blog/2023/12/31/solving-the-grecian-computer.html" rel="alternate" type="text/html" title="Solving the Grecian Computer" /><published>2023-12-31T13:00:00+00:00</published><updated>2023-12-31T13:00:00+00:00</updated><id>https://pappasbrent.com/blog/2023/12/31/solving-the-grecian-computer</id><content type="html" xml:base="https://pappasbrent.com/blog/2023/12/31/solving-the-grecian-computer.html"><![CDATA[<p>My brother gave me a difficult puzzle for Christmas, so I wrote a program to
solve it for me.</p>

<h2 id="the-puzzle">The Puzzle</h2>

<p>Last Monday, Christmas morning, my brother gifted me with <em>The Grecian
Computer</em>, a puzzle created by Project Genius. Here is a picture of it:</p>

<div class="row row-centered">
<img src="/assets/img/grecian_computer_unsolved.png" alt="Unsolved Grecian
Computer puzzle" />
</div>

<p>The puzzle is made of wood and consists of five layered circles. The circles
have ridged edges, and four rings of 12 numbers each placed at evenly-spaced
intervals. The upper circles are smaller than the lower ones, and are
interspersed with gaps. The ultimate effect is that the puzzle resembles a
cross between a gear and a clock.</p>

<p>The objective of the puzzle is to rotate the circles such that each column sums
to 42. I messed around with trying to solve the puzzle on my own for a few
minutes, but then realized that I could probably craft a program to solve it for
me. Turns out that I was correct!</p>

<h2 id="the-program">The Program</h2>

<p>I came up with the following brute-force algorithm to solve the puzzle: For each
possible rotation of each circle, check if pairing it with each possible
rotation of every other circle solves the problem. There may be a more efficient
way to solve this problem, but since the problem space is so small (there are only
12 unique rotations of each circle, and five circles, for a total of 12^5 =
248,832 combinations to check), any modern computer should be able to run a decent
implementation of this algorithm in a fraction of a second.</p>

<p>The tricky part is encoding the data in a machine-readable format. I toyed
around with a few different ideas, but ultimately decided to encode the puzzle
as a 3-dimensional integer array. The first dimension corresponds to the number of
circles (five circles, with the lowest index corresponding to the bottom
circle), the second to the number of rings in each circle (four rings for each
circle, with the lowest index corresponding to the outer-most ring) and the
third to the numbers printed on each ring (12 numbers on each ring, beginning
with a number at an arbitrary rotation I chose when starting to encode the
puzzle and made sure to keep consistent until I finished). I inserted zeros
where rings contained gaps instead of numbers.  I implemented my solution in C
and defined a macro for each circle, so here’s an example of what this encoding
looks like for the top-most circle (i.e., layer):</p>

<div class="row row-centered">
<pre>
#define LAYER_FIVE                                                             \
  {                                                                            \
    {0}, {0}, {0}, { 0, 8, 0, 3, 0, 6, 0, 10, 0, 7, 0, 15 }                    \
  }
</pre>
</div>

<p>Notice that I encode the first three rings as zero-filled arrays. This is
because the top-most layer only contains numbers for its inner-most ring, so I
fill the rings that this circle does not cover with zeros. Refer back to the above
image to see what I mean.</p>

<p>Now that we’ve encoded the data, we can implement the core puzzle-solving
algorithm:</p>

<div class="row row-centered">
<pre>
int solve(int layers[5][4][12]) {
  int l1, l2, l3, l4, l5;
  for (l1 = 0; l1 &lt; 12; l1++) {
    for (l2 = 0; l2 &lt; 12; l2++) {
      for (l3 = 0; l3 &lt; 12; l3++) {
        for (l4 = 0; l4 &lt; 12; l4++) {
          for (l5 = 0; l5 &lt; 12; l5++) {
            if (solved(layers)) {
              return 1;
            }
            rotate_layer_right(layers[4]);
          }
          rotate_layer_right(layers[3]);
        }
        rotate_layer_right(layers[2]);
      }
      rotate_layer_right(layers[1]);
    }
    rotate_layer_right(layers[0]);
  }
  return 0;
}
</pre>
</div>

<p>It’s just a deeply-nested for loop! In each innermost iteration, we check
whether the current combination of rotated circles solves the puzzle, and if
not, we try the next combination of circle rotations by rotating one of the
circles to the right once. I could write a more general solution using recursion
and back-tracking, but I prefer this simple (albeit somewhat ugly) solution for
now.</p>
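<p>In case you’re curious, here’s a sketch of what that recursive version might look like. To be clear, this is hypothetical code I didn’t actually write for the puzzle: it duplicates the <code class="language-plaintext highlighter-rouge">rotate_layer_right()</code> and <code class="language-plaintext highlighter-rouge">solved()</code> helpers (which I explain below) so that it stands alone, and the two <code class="language-plaintext highlighter-rouge">demo_*</code> functions use made-up toy data rather than the real puzzle’s numbers:</p>

```c
enum { LAYERS = 5, RINGS = 4, COLS = 12 };

/* Helpers duplicated from the rest of this post so the sketch stands alone. */
void rotate_right(int *a, int n) {
  int i;
  int temp = a[n - 1];
  for (i = n - 1; i > 0; i--) {
    a[i] = a[i - 1];
  }
  a[0] = temp;
}

void rotate_layer_right(int layer[RINGS][COLS]) {
  int ring;
  for (ring = 0; ring < RINGS; ring++) {
    rotate_right(layer[ring], COLS);
  }
}

int solved(int layers[LAYERS][RINGS][COLS]) {
  int layer, ring, column, column_sum;
  for (column = 0; column < COLS; column++) {
    column_sum = 0;
    for (ring = 0; ring < RINGS; ring++) {
      for (layer = LAYERS - 1; layer > -1; layer--) {
        if (layers[layer][ring][column] > 0) {
          column_sum += layers[layer][ring][column];
          break; /* only the top-most visible number counts */
        }
      }
    }
    if (column_sum != 42) {
      return 0;
    }
  }
  return 1;
}

/* Recursive search: try all 12 rotations of `layer`, recursing into the
   layers above it. After the loop, `layer` is back in its starting
   position, so the rotations naturally undo themselves (backtracking). */
int solve_recursive(int layers[LAYERS][RINGS][COLS], int layer) {
  int i;
  if (layer == LAYERS) {
    return solved(layers);
  }
  for (i = 0; i < COLS; i++) {
    if (solve_recursive(layers, layer + 1)) {
      return 1;
    }
    rotate_layer_right(layers[layer]);
  }
  return 0;
}

/* Toy configurations for sanity checking (NOT the real puzzle data). */
int demo_solvable(void) {
  int layers[LAYERS][RINGS][COLS] = {{{0}}};
  int ring, col;
  for (ring = 0; ring < RINGS; ring++)
    for (col = 0; col < COLS; col++)
      layers[0][ring][col] = (ring < 3) ? 10 : 12; /* 10+10+10+12 = 42 */
  return solve_recursive(layers, 0);
}

int demo_unsolvable(void) {
  int layers[LAYERS][RINGS][COLS] = {{{0}}};
  int ring, col;
  for (ring = 0; ring < RINGS; ring++)
    for (col = 0; col < COLS; col++)
      layers[0][ring][col] = 1; /* every column sums to 4, never 42 */
  return solve_recursive(layers, 0);
}
```

<p>The nice property of this version is that it generalizes to any number of layers just by changing the constants, at the cost of being a little harder to follow than the nested loops.</p>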

<p>After reading the previous code snippet, you may have two questions: How do we
rotate a layer, and how do we check if the puzzle is solved? To rotate a layer,
we simply rotate each ring in the layer by one:</p>

<div class="row row-centered">
<pre>
void rotate_right(int *a, int n) {
  int i;
  int temp = a[n - 1];
  for (i = n - 1; i &gt; 0; i--) {
    a[i] = a[i - 1];
  }
  a[0] = temp;
}

void rotate_layer_right(int layer[4][12]) {
  int ring;
  for (ring = 0; ring &lt; 4; ring++) {
    rotate_right(layer[ring], 12);
  }
}
</pre>
</div>

<p>And to check if the puzzle is solved:</p>

<div class="row row-centered">
<pre>
int solved(int layers[5][4][12]) {
  int column_sum = 0;
  int ring;
  int layer;
  int column;
  for (column = 0; column &lt; 12; column++) {
    column_sum = 0;
    for (ring = 0; ring &lt; 4; ring++) {
      for (layer = 4; layer &gt; -1; layer--) {
        if (layers[layer][ring][column] &gt; 0) {
          column_sum += layers[layer][ring][column];
          break;
        }
      }
    }
    if (column_sum != 42) {
      return 0;
    } 
  }
  return 1;
}
</pre>
</div>

<p><code class="language-plaintext highlighter-rouge">solved()</code> checks that all columns in the puzzle sum to 42. Because some rings
may contain gaps instead of numbers, when computing the running sum for a given
column, we only add the first non-zero value we find in that column (the one in
the top-most ring) and ignore the rest (that’s what the <code class="language-plaintext highlighter-rouge">break</code> statement
does).</p>

<h2 id="the-solution">The Solution</h2>

<p>After writing the above functions as well as some code for calling them and
printing the result, I successfully obtained the solution! So without further
ado, here’s the solved puzzle. Don’t look too hard at the following picture if
you want to try and solve the puzzle yourself :)</p>

<div class="row row-centered">
<img src="/assets/img/grecian_computer_solved.png" alt="Solved Grecian Computer
puzzle" />
</div>

<p>Feel free to check for yourself that all the columns sum to 42.</p>

<h2 id="conclusion">Conclusion</h2>

<p>This was a fun mental exercise! You can obtain a working version of the source
code <a href="/assets/code/c/grecian_computer.c">here</a>. I wonder if I could somehow turn
this into a competitive coding problem and submit it to a site such as LeetCode?
That would give me a reason to write a more general (and perhaps faster)
solution.</p>]]></content><author><name></name></author><category term="blog" /><summary type="html"><![CDATA[My brother gave me a difficult puzzle for Christmas, so I wrote a program to solve it for me.]]></summary></entry><entry><title type="html">365 Days of LeetCode</title><link href="https://pappasbrent.com/blog/2023/09/30/365-days-of-leetcode.html" rel="alternate" type="text/html" title="365 Days of LeetCode" /><published>2023-09-30T13:00:00+00:00</published><updated>2023-09-30T13:00:00+00:00</updated><id>https://pappasbrent.com/blog/2023/09/30/365-days-of-leetcode</id><content type="html" xml:base="https://pappasbrent.com/blog/2023/09/30/365-days-of-leetcode.html"><![CDATA[<p>I solved <a href="https://leetcode.com/">LeetCode</a>’s daily problem every day for 365
days straight.</p>

<h2 id="why">Why?</h2>
<p>I did this challenge to improve my problem-solving skills, and because I just
enjoy solving programming problems.</p>

<p>One of my life goals is to become a professor of Computer Science at an
accredited university. I don’t want to be an average professor though; I would
like to be an excellent professor. I’m talking about the kind of professor that
ignites in their students a passion for learning, and makes the road to success
clear. In order to explain my subject well, I feel I need a strong understanding
of it. Furthermore, having strong problem-solving skills will also help me solve
research problems, which professors also spend a great deal of time doing (or at
least trying to do). So by becoming an excellent programmer, I am one step
closer to becoming an excellent Computer Science professor.</p>

<p>The second reason I did this challenge is just for the fun of it. I enjoy
activities of the mind (e.g. reading, chess, and card games), and to me
programming problems are some of the most challenging mental activities. I’ve
been a little obsessed with them ever since undergrad, when I first realized
there was so much more to Computer Science than just writing code. Every time I
solve a problem, I get a little rush of serotonin, and a feeling that I am just
ever so slightly smarter than I used to be.</p>

<h2 id="how-did-it-go">How did it go?</h2>
<p>While I would like to say that I solved every problem entirely on my own, I will
admit that some of the medium and many of the harder problems stumped me, and I
had to refer to other existing solutions for help. I still tried my best to
solve every problem though, and would often invest an hour on a problem before
giving up and looking at another solution. On the bright side, reading other
peoples’ solutions introduced me to new problem-solving techniques and
approaches that I doubt I would have found if I were to only solve problems by
myself. Whether I solved a problem on my own or not, I was still learning the
concepts behind it, which to me is equally as important as solving it.</p>

<p>Speaking of which, I learned a lot! Over the course of the challenge, I got much
better at dynamic programming (DP), and towards the end I found myself solving many more hard DP
problems on my own than I could when I began my quest. I also realized what my
strengths and weaknesses are: I really like graph problems, and really should
practice sliding window problems more 😅. Finally, I came to terms with the fact
that I won’t be able to intuit every solution on my own, and that it’s sometimes
better to ask for help rather than pit myself against an intractable problem for
hours on end. I was especially stubborn at the beginning of the challenge, and
would occasionally spend more than two hours on a difficult problem before
throwing in the towel. In retrospect, a better use of my time would have been to
give up after an hour, and spend the following hour trying to obtain a deep
understanding of the problem’s solution. Oh well, live and learn.</p>

<h2 id="conclusion">Conclusion</h2>
<p>Solving a LeetCode problem every day for a whole year was difficult, but I
prevailed. I didn’t always solve the problem on my own, but I tried my best to
learn from each problem whether I solved it by myself or not. By doing this
challenge, I not only learned new problem-solving skills, but also learned a bit
of humility. If this sounds interesting to you, I encourage you to try a smaller
version of this challenge! Maybe solve one problem a day for a month instead of
a year, or perhaps try the <a href="https://adventofcode.com/">Advent of Code</a>
challenge in December.</p>

<p>Oh, and I also redeemed all the LeetCoins I earned by doing this challenge for a
free shirt! I’ll wear it with pride 😎</p>]]></content><author><name></name></author><category term="blog" /><summary type="html"><![CDATA[I solved LeetCode’s daily problem every day for 365 days straight.]]></summary></entry><entry><title type="html">Playing with Parsers</title><link href="https://pappasbrent.com/blog/2022/07/24/playing-with-parsers.html" rel="alternate" type="text/html" title="Playing with Parsers" /><published>2022-07-24T14:15:00+00:00</published><updated>2022-07-24T14:15:00+00:00</updated><id>https://pappasbrent.com/blog/2022/07/24/playing-with-parsers</id><content type="html" xml:base="https://pappasbrent.com/blog/2022/07/24/playing-with-parsers.html"><![CDATA[<!-- Abstract/Hook -->
<p>I wrote the same program in 10 different programming languages.
Here’s how the performance of the different implementations stack up to each other.</p>

<!-- Intro/Motivation -->
<h2 id="introduction">Introduction</h2>
<p>I like programming languages.
Their syntaxes, their semantics, the communities behind them - all these factors entice me to spend time learning different tools of the software engineering trade.
One approach I often take for learning new languages is to reimplement a program I’ve written in another language.
I like doing this because so long as the language paradigms are similar, I can use my prior implementation as a guide to writing the new one.
Moreover, once I feel more comfortable in the new language, I can refine its implementation to be more idiomatic.</p>

<p>A program that I like to reimplement the most often is a simple arithmetic expression parser.
About a week ago, I reimplemented this parser in Rust for practice, and suddenly wondered how the performance of this implementation would compare to ones in other languages.
I expected it would of course be faster than a Python implementation, but what about one in Go, or even Haskell?
I’m aware that studies such as the <a href="https://benchmarksgame-team.pages.debian.net/benchmarksgame/index.html">Benchmark Games</a> already exist, but I thought conducting my own study could be fun.
So I decided to go all in - I would implement the same program in all the languages I could, and then have them all race to the finish!</p>

<!-- Background
Arithmetic expression grammar
Operator precedence
How the lexer and parser work
  Explain Haskell
Testing
-->
<h2 id="background">Background</h2>
<p>I implemented my parsers in C, C#, C++, Go, Haskell, Java, Javascript, Python, Rust, and Typescript.
Here are the steps I took to do that, sans downloading all the necessary compilers/interpreters.</p>

<h3 id="the-arithmetic-expression-grammar">The Arithmetic Expression Grammar</h3>
<p>First, I designed a small language of arithmetic expressions.
Here is the full grammar in EBNF form:</p>

<div class="row row-centered">
<pre>
expr      =   addsub;
addsub    =   muldiv {('+' | '-') muldiv};
muldiv    =   neg {('*' | '/') neg};
neg       =   {'-'} parenint;
parenint  =   ('(' expr ')') | int;
int       =   digit{digit};
digit     =   0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9;
</pre>
</div>

<h3 id="operator-precedence">Operator Precedence</h3>
<p>Next I decided on the operator precedence levels.
Here they are, from greatest precedence to least:</p>
<ol>
  <li>parenthesized expressions, integer literals</li>
  <li>unary negation</li>
  <li>multiplication, division</li>
  <li>addition, subtraction</li>
</ol>

<h3 id="the-lexer-and-parser">The Lexer and Parser</h3>
<p>Then, I wrote the lexers and parsers.
In all languages (except for Haskell, which I explain in the next subsection), I implemented the <a href="https://en.wikipedia.org/wiki/Lexical_analysis#Tokenization">lexer</a> as a function which takes a string as input and returns a vector/list/arraylist of <a href="https://en.wikipedia.org/wiki/Lexical_analysis#Token">tokens</a> as output.
The parser is just an <a href="https://en.wikipedia.org/wiki/Recursive_descent_parser#:~:text=In%20computer%20science%2C%20a%20recursive,the%20nonterminals%20of%20the%20grammar">LL(1) recursive descent parser</a>, and I implemented it as a class/struct with methods for recursively parsing a given sequence of tokens to an integer result<sup id="fn:1a"><a href="#fn:1b">1</a></sup>.
For more information, check out <a href="https://en.wikipedia.org/wiki/Compilers:_Principles,_Techniques,_and_Tools">Compilers: Principles, Techniques, and Tools</a>.
I wrote the first parser in Python, and based it off the one in chapter 2.19 of <a href="https://www.oreilly.com/library/view/python-cookbook-3rd/9781449357337/">The Python Cookbook, 3rd edition</a>.
I made it a point to only use libraries/packages/modules that each language ships with, so that the only difference in the parsers would be the implementation language.
For all the gory details, you can refer to <a href="https://github.com/PappasBrent/comparing-parsers">this GitHub repo</a> containing all code for this study.</p>

<h3 id="a-note-on-the-haskell-implementation">A Note on the Haskell Implementation</h3>
<p>For most of the implementations, I wrote the same program: a lexer and an LL(1) recursive descent parser.
I implemented the Haskell parser a little differently, however, using <a href="https://youtu.be/RDalzi7mhdY">parser combinators</a>.
I did this for three reasons:
1) I wanted to practice writing parser combinators.
2) It felt like the more natural way to parse text in Haskell.
3) I wanted to see how my naive implementation using parser combinators stacked up to my naive implementation of a recursive decent parser in the other languages.</p>

<h3 id="testing">Testing</h3>
<p>After writing my parsers, I needed a way to test that they were correct.
One way to do this would be to implement a set of test cases in each parser’s language.
That would be pretty tedious, however, so I decided to <a href="https://about.gitlab.com/topics/devsecops/what-is-fuzz-testing/">fuzz test</a> them instead.</p>

<p>To fuzz the parsers, I ran them all on a set of randomly generated test inputs, and compared their results to that of <a href="https://www.gnu.org/software/bc/manual/html_mono/bc.html">bc</a>.
These inputs consisted of a hundred, a thousand, ten thousand, a hundred thousand, and a million expressions.
I considered a parser as passing a test if its output for that test matched that of bc<sup id="fn:2a"><a href="#fn:2b">2</a></sup>.
All my parsers passed all my tests before I conducted my experiment.</p>

<p>I wrote a Python script to generate these test inputs.
Basically, it recursively generates binary and unary expressions in the grammar until it reaches a specified maximum nesting depth, and then just emits an integer.
To maintain operator precedence, it parenthesizes all subexpressions.
I used methods described <a href="https://www.cs.utah.edu/~regehr/yarpgen-oopsla20.pdf">in this paper</a> to ensure that I did not emit expressions which could introduce undefined behavior, such as divide-by-zero and integer overflow errors.
This little script was fun to write, so if you’d like to take a look at it please check out the file <code class="language-plaintext highlighter-rouge">gen_exprs.py</code> in the repo (the code is commented!).</p>
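<p>To give a rough idea of how the generator works, here is a toy sketch of the same recursive idea. This is not the actual script (which is written in Python and uses the linked paper’s techniques to rule out undefined behavior); this version sidesteps that problem entirely by emitting only addition, subtraction, and unary negation over small integers:</p>

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy sketch of a random expression generator: recursively emit
   binary and unary expressions until a maximum nesting depth is
   reached, then emit an integer leaf. Every subexpression is
   parenthesized so operator precedence never matters. */

static char *out; /* cursor into the output buffer */

static void emit(const char *s) { out += sprintf(out, "%s", s); }

static void gen(int depth) {
  if (depth == 0 || rand() % 4 == 0) { /* emit an integer leaf */
    out += sprintf(out, "%d", rand() % 100);
    return;
  }
  switch (rand() % 3) { /* parenthesize every subexpression */
  case 0:
    emit("(");
    gen(depth - 1);
    emit("+");
    gen(depth - 1);
    emit(")");
    break;
  case 1:
    emit("(");
    gen(depth - 1);
    emit("-");
    gen(depth - 1);
    emit(")");
    break;
  default:
    emit("(-");
    gen(depth - 1);
    emit(")");
    break;
  }
}

/* Write one random expression with at most `depth` nesting levels into buf. */
void gen_expr(char *buf, int depth, unsigned seed) {
  srand(seed);
  out = buf;
  gen(depth);
  *out = '\0';
}
```

<p>The real script additionally emits multiplication and division, which is where the careful bounds checking comes in.</p>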

<!-- Experimental Setup
Hardware
Software
-->
<h2 id="experimental-setup">Experimental Setup</h2>
<p>I used the test inputs I described in the last section to evaluate my parsers.
I ran each parser on each input twice to warm up the cache, ran the parser five more times on the input, and then took the average of the five execution times as the result for that parser on that input.</p>

<h3 id="hardware">Hardware</h3>
<p>I developed my parsers and evaluated them on a Dell XPS 13 9310 (0991) Notebook with 16 GB of RAM and an 11th Gen Intel i7-1185G7 CPU clocked @ 3.00GHz.
I ran all my tests and conducted my evaluation on a single core.</p>

<h3 id="software">Software</h3>
<ul>
  <li>Kernel: 5.14.0-1045-oem</li>
  <li>Operating System: Ubuntu 20.04.4 LTS x86_64</li>
</ul>

<h4 id="language-technology">Language Technology</h4>
<style type="text/css">
.tg  {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
  overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
  font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-0pky{border-color:inherit;text-align:left;vertical-align:top}
</style>

<table class="tg">
<thead>
  <tr>
    <th class="tg-0pky">Language</th>
    <th class="tg-0pky">Compiler/Transpiler</th>
    <th class="tg-0pky">Interpreter</th>
  </tr>
</thead>
<tbody>
  <tr>
    <td class="tg-0pky">C</td>
    <td class="tg-0pky">gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0</td>
    <td class="tg-0pky">N/A</td>
  </tr>
  <tr>
    <td class="tg-0pky">C#</td>
    <td class="tg-0pky">Microsoft (R) Visual C# Compiler version 3.9.0-6.21124.20 (db94f4cc)</td>
    <td class="tg-0pky">Mono JIT compiler version 6.12.0.182 (tarball Tue Jun 14 22:29:01 UTC 2022)</td>
  </tr>
  <tr>
    <td class="tg-0pky">C++</td>
    <td class="tg-0pky">g++ (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0</td>
    <td class="tg-0pky">N/A</td>
  </tr>
  <tr>
    <td class="tg-0pky">Go</td>
    <td class="tg-0pky">go1.18.4 linux/amd64</td>
    <td class="tg-0pky">N/A</td>
  </tr>
  <tr>
    <td class="tg-0pky">Haskell</td>
    <td class="tg-0pky">The Glorious Glasgow Haskell Compilation System, version 8.10.7</td>
    <td class="tg-0pky">N/A</td>
  </tr>
  <tr>
    <td class="tg-0pky">Java</td>
    <td class="tg-0pky">javac 11.0.15</td>
    <td class="tg-0pky">openjdk 11.0.15 2022-04-19</td>
  </tr>
  <tr>
    <td class="tg-0pky">Javascript</td>
    <td class="tg-0pky">N/A</td>
    <td class="tg-0pky">Node v10.19.0</td>
  </tr>
  <tr>
    <td class="tg-0pky">Python</td>
    <td class="tg-0pky">N/A</td>
    <td class="tg-0pky">Python 3.8.10</td>
  </tr>
  <tr>
    <td class="tg-0pky">Rust</td>
    <td class="tg-0pky">rustc 1.61.0 (fe5b13d68 2022-05-18)</td>
    <td class="tg-0pky">N/A</td>
  </tr>
  <tr>
    <td class="tg-0pky">Typescript</td>
    <td class="tg-0pky">tsc 4.7.4</td>
    <td class="tg-0pky">Node v10.19.0</td>
  </tr>
</tbody>
</table>

<h2 id="results-and-observations">Results and Observations</h2>
<p>I’ve recorded all the raw results of my experiments in the file <code class="language-plaintext highlighter-rouge">results.csv</code> in this study’s GitHub repo.
You can <a href="https://github.com/PappasBrent/comparing-parsers/blob/main/table.csv">go here</a> to view it on GitHub, which lets you easily filter and search through it.
The time it took the parsers to run on 100, 1K, and 10K lines of input were comparable, so I graphed them in a bar chart:</p>

<div class="row row-centered">
<img src="/assets/img/100-1k-10k-resized.jpg" class="rounded mx-auto d-block" style="width: 750px" />
</div>

<p>The C/C++/Rust implementations were the fastest.
This is likely because these languages aren’t garbage collected, and have minimal runtime environments (basically just their standard libraries).
The C# implementation was also pretty fast on the 100 and 1K lines of input, but in the rightmost group you can see it start to slow down.
I’m not sure why C# is so fast on the smaller inputs; if anyone knows why and would be willing to share I’d appreciate it :)</p>

<p>Next up, we have the Haskell, Java, and Go implementations.
These languages are garbage-collected and have larger runtimes, so it makes sense that they would be a bit slower.</p>

<p>Finally, there are the Javascript, Python, and Typescript implementations.
I find the results for these implementations most intriguing.
Naturally, they are slower than the other implementations since they are interpreted and not compiled, but I’m amazed at how poorly the Python implementation scales.
It outperforms the Javascript and Typescript implementations on the smaller inputs, but its performance starts to fall off at 10K lines of input.
Meanwhile, the Javascript and Typescript implementations scale much better.
I think it is interesting to note that the Typescript implementation is just slightly faster than the Javascript one - the Typescript transpiler must write better Javascript than I do!</p>

<p>Now, let’s move on to the 100K lines of input.
I could still fit all the parser execution times for this file in a single bar chart, so here it is:</p>

<div class="row row-centered">
<img src="/assets/img/100k-resized.jpg" class="rounded mx-auto d-block" style="width: 750px" />
</div>

<p>The Python implementation continues to scale poorly.
I tried to improve its performance a few times by changing how the parser’s lookahead worked, but in the end I wasn’t able to salvage it.
Maybe if I tried a bit harder I could have found a way to get closer to the Javascript and Typescript implementations, but oh well.</p>

<p>Meanwhile, the Haskell implementation begins to really slow down as well.
I’m not entirely sure why this is.
It may because it operates on <code class="language-plaintext highlighter-rouge">String</code>s instead of <a href="https://hackage.haskell.org/package/bytestring"><code class="language-plaintext highlighter-rouge">ByteString</code>s</a>.
Or it could be because the implementation is rather naive, and doesn’t make efficient use of memory.
In any case, the next time I decide to solve a parsing problem using functional programming, I’ll make use of an industrial-strength parser combinator library such as <a href="https://hackage.haskell.org/package/megaparsec">Megaparsec</a> or <a href="https://hackage.haskell.org/package/attoparsec">Attoparsec</a> instead of hand-rolling my own.</p>

<p>Finally, I graphed the parser execution times on the 1M lines of input.
I did not plot the execution times for Python and Haskell in this graph since they took so much longer than the other parsers to finish (13.4 and 72.5 seconds, respectively).</p>

<div class="row row-centered">
<img src="/assets/img/1M-resized.jpg" class="rounded mx-auto d-block" style="width: 750px" />
</div>

<p>What I found most shocking here is how slow the Rust implementation is at this point.
Since Rust isn’t garbage collected, I would have expected it to keep pace with C and C++.
Instead, it lags behind even the interpreted languages.
I thought that surely I must be doing something wrong, so I did some digging and stumbled across <a href="https://renato.athaydes.com/posts/how-to-write-slow-rust-code.html">this article</a> by Renato Athaydes.
Apparently, Rust is <a href="https://ihatereality.space/02-you-would-not-use-filter-map/#conclusion">full of little idioms</a> that one must use if they wish to write optimal Rust code.
This disappoints me, because while these “rustisms” look elegant after you have seen them, they are unobvious.
Please check out the linked blog posts for examples.
Don’t get me wrong, I think it’s normal for a programmer to have to know certain language idioms in order to squeeze every last drop of performance out of their code.
What I don’t like about Rust, however, is that its idioms are not akin to ones you would use in other imperative languages (e.g., favoring static variables over pointers/references to improve <a href="https://gameprogrammingpatterns.com/data-locality.html">data locality</a> and reduce cache accesses).
So if you invest time in learning them, you may be able to write better Rust code, but not more efficient code in another language.
More likely than not, these idioms will not translate well (if at all) to other languages.</p>

<h2 id="side-notes">Side Notes</h2>
<ul>
  <li>The code size of the C implementation is the greatest of all the parsers I wrote, and of course I ran into memory errors while implementing it :)</li>
  <li>Go forced me to write more verbose code and to pass error messages up the call stack, but it made it easier to do so.
I still ran into memory errors though.</li>
  <li>The Haskell implementation is by far the shortest, but some could see this as a downside since at times the code can be a bit terse.
Or just illegible if you’re not familiar with parser combinators 🤷‍♂️️.</li>
  <li>With the Javascript implementation I had fun trying to convert <code class="language-plaintext highlighter-rouge">-0</code> to <code class="language-plaintext highlighter-rouge">0</code> :-)</li>
  <li>Python was the easiest to implement, and I tried to refactor it a few times to make it faster.
In the end I gave up - Python is just <em>so</em> slow.</li>
  <li>Rust’s enum variants gave me an intuitive way to define tokens, and partly thanks to the language itself I did not encounter any memory errors.</li>
</ul>

<h2 id="bottom-line">Bottom Line</h2>
<ul>
  <li>If I want to write a quick script to process large text files, I may reach for TypeScript in the future instead of Python, since it provides the benefits of static typing <em>and</em> better performance.</li>
  <li>Naively implemented imperative parsers are likely to be faster than naively implemented functional parsers.</li>
  <li>Rust has a bit too many esoteric idioms for my liking.</li>
</ul>

<h2 id="future-work">Future Work</h2>
<p>These parsers aren’t perfect.
Honestly, they’re not even idiomatic in the languages they are implemented in.
It would be interesting to see another study comparing the performance of idiomatic parser implementations (or maybe popular parsing frameworks?) across languages.
I might do this in the future, but if anyone else wants to beat me to it I’d be cool with that.</p>

<hr />
<p id="fn:1b">
<a href="#fn:1a">1</a>
Technically, I think it would be more accurate to call these parsers interpreters since they don't construct <a href="https://en.wikipedia.org/wiki/Abstract_syntax_tree">abstract syntax trees</a> out of their inputs, but I prefer to use the word parser because it's quicker to write and say.
</p>

<p id="fn:2b">
<a href="#fn:2a">2</a>
This means I am treating bc as the ground truth for my testing.
</p>]]></content><author><name></name></author><category term="blog" /><summary type="html"><![CDATA[I wrote the same program in 10 different programming languages. Here’s how the performance of the different implementations stack up to each other.]]></summary></entry></feed>