Published: 2021 March 7

I’ve spent the last year building keyboards, which has included writing firmware for a variety of custom circuit boards. I initially wrote this firmware in Rust, but despite years of experience with that language I still struggled quite a bit. I eventually got my keyboards working, but it took an embarrassingly long time and wasn’t fun. After repeated suggestions from my much more Rust-and-computing-experienced friend Jamie Brandon, I rewrote the firmware in Zig, which turned out swimmingly. I found this quite surprising, given that I’d never seen Zig before and it’s a pre-1.0 language written by a fellow PDX hipster with basically just a single page of documentation. The experience went so well, in fact, that I now feel just as likely to turn to Zig (a language I’ve used for a dozen hours) as to Rust (which I’ve used for at least a thousand hours). This, of course, reflects as much about me and my interests as it does about either of these languages. So I’ll have to explain what I want from a systems programming language in the first place. Also, to explain why I struggled with Rust I’ll have to show a lot of complex code that I’m obviously unhappy about. My intent here is not to gripe about Rust, but to establish my (lack of) credibility: It’s so you can judge for yourself whether I’m using Rust’s features in a reasonable way or if I’ve totally lost the plot. Finally, while it risks falling into the dreadfully boring “language X is better than Y” blog trope, I feel that it’d be more helpful to some readers if I explicitly compare Rust and Zig, rather than write a wholly positive “Zig’s great!” article. (After all, I’d steadily ignored six months of Jamie gushing about Zig because, “that’s great buddy, but I already know Rust and I just want to get my keyboard done, okay?”)

What I want from a systems language

I was educated as a physicist and learned programming so I could make data visualizations.
My first languages were PostScript and Ruby (dynamic, interpreted languages) and I later moved to JavaScript so I could draw on the web. That led me to Clojure (using ClojureScript to draw on the web), where I’ve spent much of my career. In 2017 I decided to learn a systems language. Partly this was intellectual curiosity — I wanted to become more familiar with concepts like the stack, heap, pointers, and static types which had remained mucky to me as a web developer. But mostly it was because I wanted the capabilities that systems languages promised: To write code that was fast; that could take advantage of how computers actually worked and run as fast as the hardware allowed. To write applications that could run in minimal environments like microcontrollers or web assembly where it just isn’t feasible (in time or space) to carry along a garbage collector, language runtime, etc. My interest was not (and still isn’t) in operating systems, programming language design, or safety (with respect to memory, formal verifiability, modeling as types, etc.). I just wanted to blink the little squares on the screen on and off very quickly. Bas
(read more)
A few days ago someone tweeted a question asking which of the following PHP snippets was better than the others, or whether there might be an even better approach. I tweeted my answer in the following cryptic paragraph. Place the if/else cases in a factory object that creates a polymorphic object for each variant. Create the factory in ‘main’ and pass it into your app. That will ensure that the if/else chain occurs only once. Others have since asked me for an example. Twitter is not the best medium for that so… Firstly, if the sole intent of the programmer is to transla
(read more)
Posted: 2021-03-06 It seems to be widely accepted that creating a powerful, useful Emacs setup "by hand" is just too much trouble, and you should choose a "distro" like Doom Emacs. But is it really all so bad? If you go the route of "hand-made", will you suffer through endless nights of fixing your setup? The answer is: probably not, but read on for more details! 1 → Why Custom? I've got a lot of respect for the various Emacs "distros" out there, but my inclination for DIY as well as for understanding my tools keeps me from going that route. I'
(read more)
At the beginning of this decade, a few of us Haskellers were exploring how best to do client-side web programming. We didn’t want to write JavaScript. There’s a surprising number of techniques we tried to avoid doing so. There was work coming from academia and industry. Here’s a history of my own experience in this problem space. In 2008, Joel Bjornson and Niklas Broberg published HJScript, which was a Haskell EDSL for writing JavaScript. It had the ability to express typed JS via a GADT. I used it in 2010 on a project to make limited DOM manipulations. I wrote a wrapper around jquery, for example. It was nice to write in Haskell, but it was also mental overhead to write in two languages at once (it still had JavaScript’s semantics). In the end I went back to using plain JavaScript. Around 2010, Adam Chlipala announces Ur, a radical web dev language with row types, which compiles to both native object code and JavaScript, fairly transparently, embedding both HTML and SQL syntax into the language. I am both impressed by the simplicity of the code and correctness, and horrified by some of the code involving metaprogramming. The row types documentation frankly scares me away. After trying out some examples, I don’t return to it.1 To this day I am still interested in this architecture. Some time in 2011, Opa appears, but apparently nobody wants to learn yet another server-side language. I don’t know anyone who has used this in production. In August 2011, I was experimenting with GHCJS, notes which later I copied over to the Haskell Wiki under the name The JavaScript Problem. At the time, I encountered bugs and runtime difficulties with some simple GHCJS experiments. From there I mostly abandoned it as a choice, underwhelmed. In December 2011, I came up with ji, a means of controlling a web page from a Haskell web service, which later was renamed to threepenny-gui and is now maintained by Heinrich Apfelmus to this day. 
It turned out to be extremely powerful; I wrote an IRC-like app in which people could chat on a web page in a page of code. However, in the end it wasn’t to be for mainstream web dev; a good point that Michael Snoyman made was that it had a high server-side cost, and did not scale to multiple servers. In the end, threepenny-gui is a great library to consider for cross-platform desktop programs (such as with Electron). In January 201
(read more)
The results of this experiment are not exactly close to my target as you can see, but I thought it was worth a blog post anyway. There was this rough idea I’ve been thinking about in Conway’s Game of Life for a really long time. I wonder if it's possible to use some kind of stochastic algorithm that gives you an initial state which forms legible text after many cycles.— yakinavault (@yakinavault) August 7, 2020 I came across an article of the same title by Kevin Galligan recently and I thought I could do something similar using a different approach. What if instead of using SAT Solvers, I use some kind of heuristic algorithm that could somehow “program” a large world of Game of Life to display an image after a few generations? There are other ways of achieving this. One is by placing still life states at specific pixels as described in this codegolf question. What I’m thinking of is to display Mona Lisa for a single frame/generation of ‘non-still’ Game of Life. I began working on a proof of concept using the hill climbing algorithm. The idea was very simple: iteratively modify a random 2D Game of Life state until its Nth generation looks similar to Mona Lisa. Here’s the full algorithm.

best_score := infinity
target := Mona Lisa with dimensions m x n
canvas := random matrix of m x n
best_result := canvas
do
    modified_canvas := copy of canvas with a single random cell inverted
    nth_modified_canvas := run N generations of Game of Life on modified_canvas
    score := how close nth_modified_canvas is to target
    if score < best_score then
        best_score := score
        best_result := modified_canvas
    canvas := best_result
until max_iterations reached or best_score < threshold

I hacked up a single-core prototype.
def modify(canvas, shape):
    x, y = shape
    px = int(np.random.uniform(x + 1)) - 1
    py = int(np.random.uniform(y + 1)) - 1
    canvas[px][py] = not canvas[px][py]
    return canvas

def rmse(predictions, targets):
    return np.sqrt(np.mean((predictions - targets) ** 2))

while best_score > limit:
    canvases = np.tile(np.copy(best_seed), (batch_size, 1, 1))
    rms_errors = []
    for i in range(len(canvases)):
        canvases[i] = modify(canvases[i], (m, n))
        rmse_val = rmse(target, nth_generation(np.copy(canvases[i])))
        rms_errors.append(rmse_val)
    lowest = min(rms_errors)
    if lowest < best_score:
        best_score = lowest
        best_result = canvases[rms_errors.index(lowest)]

Hill Climbing works by finding the closest neighboring state to the current state with the least error from a ‘target_state’ (Mona Lisa). The way I find the closest neighbor in every step is to create a copy of the best solution we have so far and invert a random cell. This change is small enough that we don’t risk stepping over any local minima. We use the root-mean-square error metric to compare the best state and the target. Other error metrics can be experimented with, but for this problem, I found that RMSE was sufficient. After a few days of CPU time(!), I was able to obtain something that resemble
(read more)
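The hill-climbing loop described in the excerpt above can be condensed into a small runnable sketch. This is my own illustrative reconstruction, not the author's code; the names `life_step`, `nth_generation`, and `hill_climb` are mine, and the world uses dead borders rather than wraparound:

```python
import numpy as np

def life_step(grid):
    """One generation of Conway's Game of Life with dead borders."""
    # Count live neighbours by summing the eight shifted copies of the grid.
    padded = np.pad(grid, 1)
    h, w = grid.shape
    neighbours = sum(
        padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A cell lives next step if it has 3 neighbours, or 2 and is already alive.
    return (neighbours == 3) | ((neighbours == 2) & (grid == 1))

def nth_generation(grid, n):
    for _ in range(n):
        grid = life_step(grid)
    return grid.astype(int)

def hill_climb(target, n_gens, iterations, seed=0):
    rng = np.random.default_rng(seed)
    canvas = rng.integers(0, 2, size=target.shape)
    best_score = np.inf
    for _ in range(iterations):
        candidate = canvas.copy()
        x = rng.integers(0, target.shape[0])
        y = rng.integers(0, target.shape[1])
        candidate[x, y] ^= 1                  # invert one random cell
        score = np.sqrt(np.mean((nth_generation(candidate, n_gens) - target) ** 2))
        if score < best_score:                # keep the change only if it helps
            best_score, canvas = score, candidate
    return canvas, best_score
```

With a real target like the Mona Lisa this takes the "few days of CPU time" the author mentions; on toy grids it runs in milliseconds.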
We all think of the CPU as the "brains" of a computer, but what does that actually mean? What is going on inside with the billions of transistors to make your computer work? In this four-part mini series we'll be focusing on computer hardware design, covering the ins and outs of what makes a computer work. The series will cover computer architecture, processor circuit design, VLSI (very-large-scale integration), chip fabrication, and future trends in computing. If you've always been interested in the details of how processors work on the inside, stick around because this is what you want to know to get started. We'll start at a very high level of what a processor does and how the building blocks come together in a functioning design. This includes processor cores, the memory hierarchy, branch prediction, and more. First, we need a basic definition of what a CPU does. The simplest explanation is that a CPU follows a set of instructions to perform some operation on a set of inputs. For example, this could be reading a value from memory, then adding it to another value, and finally storing the result back to memory in a different location. It could also be something more complex like dividing two numbers if the result of the previous calculation was greater than zero. When you want to run a program like an operating system or a game, the program itself is a series of instructions for the CPU to execute. These instructions are loaded from memory and on a simple processor, they are executed one by one until the program is finished. While software developers write their programs in high-level languages like C++ or Python, for example, the processor can't understand that. It only understands 1s and 0s so we need a way to represent code in this format. Programs are compiled into a set of low-level instructions called assembly language as part of an Instruction Set Architecture (ISA). This is the set of instructions that the CPU is built to understand and execute. 
Some of the most common ISAs are x86, MIPS, ARM, RISC-V, and PowerPC. Just like the syntax for writing a function in C++ is different from a function that does the same thing in Python, each ISA has a different syntax. These ISAs can be broken up into two main categories: fixed-length and variable-length. The RISC-V ISA uses fixed-length instructions which means a certain predefined number of bits in each instruction determine what type of instruction it is. This is different from x86 which uses variable length instructions. In x86, instructions can be encoded in different ways and with different numbers of bits for different parts. Because of this complexity, the instruction decoder in x86 CPUs is typically the most complex part of the whole design. Fixed-length instructions allow for easier decoding due to their regular structure, but limit the number of total instructions that an ISA can support. While the common versions of the RISC-V architecture have about 100 instructions and are open-source, x86 is proprietary and nobody really knows how many instructions there are. People generally believe there are a few thousand x86 instructions but the exact number i
(read more)
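The fixed-length decoding that the excerpt above attributes to RISC-V is easy to demonstrate: every 32-bit R-type instruction stores its fields at fixed bit positions, so a decoder is just shifts and masks. A small illustrative sketch (mine, not from the article):

```python
def decode_rtype(word):
    """Split a 32-bit RISC-V R-type instruction into its fixed bit fields."""
    return {
        "opcode": word & 0x7F,           # bits 6..0
        "rd":     (word >> 7)  & 0x1F,   # bits 11..7  (destination register)
        "funct3": (word >> 12) & 0x07,   # bits 14..12
        "rs1":    (word >> 15) & 0x1F,   # bits 19..15 (source register 1)
        "rs2":    (word >> 20) & 0x1F,   # bits 24..20 (source register 2)
        "funct7": (word >> 25) & 0x7F,   # bits 31..25
    }

# `add x3, x1, x2` encodes to the 32-bit word 0x002081B3.
fields = decode_rtype(0x002081B3)
```

An x86 decoder, by contrast, must first work out how long the instruction even is (prefixes, opcode bytes, ModRM, SIB, immediates) before it can extract anything, which is why the article calls the x86 decoder the most complex part of the design.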
Introduction

Recently, I’ve been thinking through the implications of building an authentication system. The amount of work required to successfully pull off what amounts to a Boolean decision is staggering. One of the more controversial parts of authentication is proper handling of passwords. At this point it’s common knowledge that passwords should be hashed, but the how is still very much up for debate. When I work through designing security software, I try to lean on recommendations from the community. I do this because when it comes to security, rising tides raise all boats. If the recommendations are reasonable, it’s a great way to back up your decision. If they aren
(read more)
Example Code

// linked list data type and map operation
def main() := ffi(
  "console.log",
  head(map(fun(x) -> add(x, 1), list))
)

type List a =
  | Nil {}
  | Cons { head : a, tail : List a }
end

def list := Cons { head = 1, tail = Cons { head = 2, tail = Nil {} } }

def head(list) :=
  case list of
  | Cons { head = head } -> head
  end

def map(f, xs) :=
  case xs of
  | Nil {} -> Nil {}
  | Cons { head = x, tail = rest } -> Cons { head = f(x), tail = map(f, rest) }
  end

Features
✓ First-class Functions
✓ Algebraic Data Types
✓ Pattern Matching
✓ Extensible Records
✓ Static Typing & Type Inference
✓ Written in Haskell
✓ Targets JavaScript

More info he
(read more)
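For readers unfamiliar with the toy language above, the same `List` type and `map` operation can be mirrored in Python with classes and recursion. This is an illustrative translation of the example, not the project's own code:

```python
from dataclasses import dataclass

# List a = Nil | Cons { head : a, tail : List a }
@dataclass
class Nil:
    pass

@dataclass
class Cons:
    head: object
    tail: object  # a Nil or another Cons

def head(lst):
    # Like the toy language's `case`, this only handles the Cons shape.
    assert isinstance(lst, Cons), "head of empty list"
    return lst.head

def map_list(f, xs):
    if isinstance(xs, Nil):
        return Nil()
    return Cons(f(xs.head), map_list(f, xs.tail))

lst = Cons(1, Cons(2, Nil()))
result = head(map_list(lambda x: x + 1, lst))  # add 1 to each element, take the head
```

The algebraic-data-type and pattern-matching features the project lists are exactly what make this example one `case` expression per function in the original.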
UI layouts are always a hassle. Whatever layout system I've made, I was never 100% happy with it. Some lacked simplicity, others lacked control. Recently I came back to a method I call RectCut. It is simple, and it gives you control for very complex layouts. You might have guessed by now that RectCut is based around cutting rectangles. And it starts with... well, a rectangle:

struct Rect {
    float minx, miny, maxx, maxy;
};

The second part is four basic functions to cut it:

Rect cut_left(Rect* rect, float a) {
    float minx = rect->minx;
    rect->minx = min(rect->maxx, rect->minx + a);
    return (Rect){ minx, rect->miny, rect->minx, rect->maxy };
}

Rect cut_right(Rect* rect, float a)
(read more)
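The RectCut idea above ports almost mechanically to other languages: each cut shrinks the remaining rectangle in place and returns the piece that was cut off. A small Python sketch (my own port of the excerpt's `cut_left`, plus a hypothetical `cut_top` to show a layout being built):

```python
from dataclasses import dataclass

@dataclass
class Rect:
    minx: float
    miny: float
    maxx: float
    maxy: float

def cut_left(rect, a):
    """Shrink `rect` from the left by `a`; return the cut-off strip."""
    minx = rect.minx
    rect.minx = min(rect.maxx, rect.minx + a)  # clamp so the cut never overshoots
    return Rect(minx, rect.miny, rect.minx, rect.maxy)

def cut_top(rect, a):
    """Shrink `rect` from the top by `a`; return the cut-off strip."""
    miny = rect.miny
    rect.miny = min(rect.maxy, rect.miny + a)
    return Rect(rect.minx, miny, rect.maxx, rect.miny)

# Lay out a toolbar and a sidebar inside an 800x600 window:
layout = Rect(0, 0, 800, 600)
toolbar = cut_top(layout, 40)    # full-width strip at the top
sidebar = cut_left(layout, 200)  # left column of what remains
```

The appeal is that each widget simply carves its space out of the remainder, so nesting and complex layouts fall out of plain sequential code.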
Owner: Andy Herrick
Type: Feature
Scope: SE
Status: Candidate
Release: tbd
Component: client-libs / java.awt
Discussion: awt dash dev at openjdk dot java dot net
Effort: S
Duration: S
Relates to: JEP 289: Deprecate the Applet API
Reviewed by: Alexander Matveev, Kevin Rushforth, Philip Race, Sergey Bylokhov
Created: 2020/11/10 15:54
Updated: 2021/03/06 00:09
Issue: 8256145

Summary: Deprecate the Applet API for removal. It is essentially irrelevant since all web-browser vendors have either removed support for Java browser plug-ins or announced plans to do so.

Description: Deprecate, for removal, these classes and interfaces of the standard Java API: java.applet.Applet java.applet.AppletStub java.applet.AppletContext java.applet.Au
(read more)
This series will be about the video game Jurassic Park Trespasser. My goal is to be able to play the game by rewriting its source code in Rust based on the original C++ code base, hopefully discovering interesting things along the way. Why this project? I'm not sure why I like this game; I only played it properly once and never finished it, but I was amazed by its technical aspects. Even with all the technical issues, I’m sure all the fans will agree that being chased by a clumsy raptor while trying to wrangle a rifle with one hand is an experience that only this game can give. For this project, I decided to go back to Rust after trying it for Advent o
(read more)
Visual design updates that are more than skin deep When a new major version of some piece of software is released, there is often an immediate focus put on visual changes. If there aren’t a ton of new and shinies, social media will inevitably be filled with words like “stale,” “old,” and “outdated.” This has become especially true for elementary OS, whose visual design hasn’t really changed all that much over the years. At elementary, we tend to avoid making changes for the sake of change. We’re very skeptical about design trends, and do our best to create things that feel a bit more “evergreen.” After all, “Good design is long lasting” and this allows us to focus more on refining than constantly reinventing. We also have a third-party developer community to think about, and making sweeping visual changes means that the nearly 200 apps in AppCenter will have to be updated and tested to make sure they still look as intended. So, when we decided to work on the look and feel for elementary OS 6, we wanted to approach things with a lot of intentionality, avoiding trends and focusing on setting the stage for the next several years. App developers rely on pre-made widgets to do a lot of the heavy lifting and provide good default styles when making their apps. In addition to the widgets provided by GTK, we also ship our GTK companion library Granite that makes replicating common elementary design patterns a breeze. In elementary OS 6, we’re also making heavy use of Handy—a library that was originally developed by Purism for mobile interfaces but has now become a core part of the GNOME app development platform on the desktop. Thanks to Handy, we have two major, obvious visual design improvements that developers can adopt. We’ve long had plans to modernize the Granite Avatar widget. A continual problem we’ve faced is that many people just don’t set an avatar for their user account. 
As a consequence, we need a more meaningful fallback design that allows avatars to be distinct and useful in apps like Mail or in System Settings. As it turns out, the folks behind Han
(read more)
Memory corruption to root privileges. Recently I have taken an interest in a project called SerenityOS. Stolen straight from the GitHub: SerenityOS is a love letter to ’90s user interfaces with a custom Unix-like core. It flatters with sincerity by stealing beautiful ideas from various other systems. Roughly speaking, the goal is a marriage between the aesthetic of late-1990s productivity software and the power-user accessibility of late-2000s *nix. This is a system by us, for us, based on the things we like. It’s a surprisingly featured hobby operating system with quite a welcoming community behind it. It caught my radar after a series of videos by LiveOverflow and Andreas (the developer) detailing a few exploits made during the 2020 hxp CTF, so I decided to explore the system myself. Eventually, I found a memory corruption bug in some networking code that could be leveraged into kernel-mode code execution. The vulnerability can be hit so easily by some bad code that I’m surprised it wasn’t found by a fuzzer immediately. At its core, it’s a stack overflow in the function TCPSocket::send_tcp_packet. To see how, let’s take a look at the implementation. KResult TCPSocket::send_tcp_packet(u16 flags, const UserOrKernelBuffer* payload, size_t payload_size) {
(read more)
This post, previously titled “Thirty Years On”, appeared on another incarnation of my blog 10 years ago. I am being lazy and nostalgic and re-posting it today as it’s the 40th birthday of my first computer, the diminutive Sinclair ZX81. On Christmas day 1981 I awoke with the usual excitement of any 9 year old boy. I clearly remember going downstairs and being told not to go into the lounge because my Dad was busy setting up my main Christmas present. In those days we’d get a main present and some other smaller presents. My parents weren’t well off; we lived in a typical 3 bedroom semi in Southern England and got by as best we could. After breakfast in the kitchen we were eventually allowed to go into the lounge to open some presents. What greeted me was the device that propelled me into the world of computing. My parents had bought me a Sinclair ZX81. The reason we weren’t allowed into the lounge was because my Dad had got up early to go and set it up, connecting it to the family TV. He spent most of the early morning typing in some code from a manual or magazine (I forget which) so I’d have something to play with right away. I remember with great fondness spending much of the day, and the following year, playing with my very first computer. I would avidly buy magazines and type in the listings. I’d borrow books from my local library and interpret the TRS-80 or other generic BASIC programs into something my little ZX81 could do. The family got sick of me monopolising the main TV in the house, and eventually got hold of an old one for me to use in the kitchen. I spent much of my pre-teen years sat on a stool in the kitchen about 3 inches from a 23" TV on the kitchen breakfast bar, with my ZX81 on a shelf below. My brother and sister would have friends round and I was pretty much always there, typing in some code or trying to get something to load from a tape cassette. Such happy days. I’d frequently be amazed at the raw computing power in my hands.
One day I had to go to my Dad’s work because school was closed. I took my ZX81 with me and wrote a dating application. It stored vital details about individuals in a database and could fi
(read more)
This post is half a gentle nudge that you should be using GitHub::Result more often and half a continuation of my Resilience in Ruby and Limit Everything: Timeouts for Shell Commands in Ruby posts. You can read this post and get value without reading those, but if you really want to dig in, I'd read them first.

Adding timeouts to Speaker Deck's shell commands (as discussed in the aforementioned limit everything post) was a great start. But that wasn't enough. The code was not re-usable and definitely not aesthetically pleasing. Last night I watched the Semicolon & Sons Rails Best Practices video (it's great, you should watch it). One note that stuck with me was:

Programmers like me who work on a large variety of projects can greatly improve their productivity by writing their code in a way that it can be copied and pasted into another project and, more or less, just work.

I think that sums up this next step in resilience for shell commands – improving ease of reuse. In this case, I'm talking ease of reuse in the same project. But most of this is generic enough it could easily be reused on another project as Jack advocated for.

The Foundation

First, let's start with the foundation, from which all the other commands are built. It's pretty simple:

module SpeakerDeckCommand
  class Error < StandardError; end
  class Timeout < Error; end

  module_function

  def call(command, options = {})
    GitHub::Result.new {
      child = POSIX::Spawn::Child.build(*command, options)
      begin
        child.exec!
      rescue POSIX::Spawn::TimeoutExceeded
        raise Timeout, child.inspect
      end

      if child.success?
        child
      else
        raise Error, child.inspect
      end
    }
  end
end

Sure, this should be generalized to Command or ShellCommand, but this simple layer that wraps each POSIX::Spawn command in a GitHub::Result makes chaining shell commands fantastically smooth. If you are new to GitHub::Result, you might be fine reading this, but not quite sure how to use it.
Let's read through a few examples quickly so you are up to speed. Calling a command returns a GitHub::Result: SpeakerDeckCommand.ca
(read more)
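For readers outside Ruby, the pattern in the excerpt above — wrap a shell command so that timeouts and non-zero exits become errors captured in a result object — can be sketched in Python with the standard `subprocess` module. The `Result`, `call`, and exception names below are my own stand-ins, not GitHub::Result's actual API:

```python
import subprocess

class Result:
    """Tiny analogue of a result type: holds either a value or the error raised."""
    def __init__(self, fn):
        try:
            self.value, self.error = fn(), None
        except Exception as e:
            self.value, self.error = None, e

    def ok(self):
        return self.error is None

class CommandError(Exception): pass
class CommandTimeout(CommandError): pass

def call(command, timeout=None):
    def run():
        try:
            proc = subprocess.run(command, capture_output=True, timeout=timeout)
        except subprocess.TimeoutExpired as e:
            raise CommandTimeout(str(e))       # timeout becomes a typed error
        if proc.returncode != 0:
            raise CommandError(proc.stderr.decode())  # non-zero exit too
        return proc
    return Result(run)
```

As in the Ruby version, the payoff is that callers never handle exceptions inline; they inspect the result, which makes chaining several shell commands much smoother.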
In this talk we will publish our research we conducted on 28 different AntiVirus products on macOS through 2020. Our focus was to assess the XPC services these products expose and if they presented any security vulnerabilities. We will talk about the typical issues, and demonstrate plenty of vulnerabilities, which typically led to full control of the given product or local privilege escalation on the system. At the end we will give advice to developers how to write secure XPC services.
(read more)
Wave Computing has emerged from bankruptcy proceedings with the surprise news that it’s taking the name of its MIPS subsidiary and moving to the free and open-source RISC-V ISA for future processor IP. Wave Computing dipped its toes in the free and open-source silicon movement back in 2018, announcing that its subsidiary MIPS Tech, acquired from Imagination Technologies in June that year, would provide 32- and 64-bit versions of the MIPS instruction set architecture (ISA) and full licences to its MIPS-related patent portfolio free of licensing fees and royalty payments. “We invite the worldwide community to join us in this exciting journey,” MIPS IP president Art Swift said at the time, “and look forward to seeing the many MIPS-based innovations that result.” In March
(read more)
Machine learning (ML) has been around conceptually since 1959, when Arthur Samuel, a pioneer in the field of computer gaming and artificial intelligence, coined the term. Samuel said that machine learning “gives computers the ability to learn without being explicitly programmed”. While at IBM he wrote a program to play Checkers, which became the first known self-learning program.

Machine learning falls under the umbrella of artificial intelligence (AI). ML enables computer algorithms to improve automatically through experience and by processing large amounts of data.

Sample data, known as training data, is used by machine learning algorithms to build a model. The training data enables the ML algorithms to find relationships and patterns, generate conclusions, and determine confidence scores. ML is used in image recognition, detecting aberrant compute and network behavior, and spam recognition.

An Introduction to Training and Inference

Training

The training process creates a machine learning model: the ML application studies vast amounts of data to learn about a specific scenario. Training uses a deep-learning framework, such as Google TensorFlow, PyTorch, or Apache Spark. Training
(read more)
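The training/inference split described above can be illustrated without any framework: training fits model parameters to sample data, and inference applies the fitted parameters to new inputs. A minimal toy example using plain NumPy gradient descent (my own sketch, not from the article):

```python
import numpy as np

# Training: fit y ≈ w*x + b to sample ("training") data by gradient descent.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 3.0 * x + 1.0 + rng.normal(0, 0.05, 200)   # noisy line: true w=3, b=1

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = (w * x + b) - y
    w -= lr * 2 * np.mean(err * x)   # gradient of mean squared error w.r.t. w
    b -= lr * 2 * np.mean(err)       # gradient of mean squared error w.r.t. b

# Inference: apply the trained parameters to inputs the model never saw.
def predict(x_new):
    return w * x_new + b
```

Real deep-learning training is this same loop scaled up to millions of parameters, which is why it is compute-hungry, while inference is a single cheap forward pass.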
Review: Blackbird Secure Desktop – a fully open source modern POWER9 workstation without any proprietary code

There’s a spectrum of openness when it comes to computers. Most people hover somewhere between fully closed – proprietary hardware, proprietary operating system – and partly open – proprietary hardware, open source operating system. Even if you run Linux on your AMD or Intel machine, you’re running it on top of a veritable spider’s web of proprietary firmware for networking, graphics, the IME, WiFi, BlueTooth, USB, and more. Even if you opt for something like a System76 machine, which has open firmware as a BIOS replacement and to cover some functions like keyboard lighting, you’re still running lots of closed firmware blobs for all kinds of components. It’s virtually impossible to free yourself from this web. Virtually impossible, yes, but not entirely impossible. There are options out there to run a machine that is entirely open source, from firmware all the way up to the applications you run. Sure, I can almost hear you think, but it’s going to be some outdated, slow machine that requires tons of tinkering and deep knowledge, out of reach of normal users or people who just want to buy a computer, take it out of the box, and get going. What if I told you there is a line of modern workstations, with all the modern amenities we’ve come to expect, that is entirely open? The instruction set, the firmware for the various components, the boot environment, the operating system, and the applications? No firmware blobs, no closed code hiding in var
(read more)
(disclaimer: I talked about this in private with Ben but it seemed necessary to post this publicly) (TL;DR: We need to avoid pleasing every possible group of users at once, and as such the better we can figure out our target group, the better the end result will be; that is why we need to split some of the changes we would all like to see between base and a new prelude) At the moment, what the Prelude is trying to achieve is unclear beyond the simple “default exports”. It does not particularly help library writers, beginners, compiler engineers, or application developers. As a consequence, we have seen many alternative preludes trying to address the concerns of those categories. We have the preludes that pride themselves on being an application development framework (with effect tracking, logging, database access, etc.), those that provide better data structures and re-export other packages, and so on. As such, I think it is important to have this discussion about a prevalent alternative prelude under the prism of “Which category do we aim to serve?”. The library writers certainly may not want to have mtl and transformers re-exported! But they want head :: NonEmpty a, which is also something that would benefit the beginners! The application developers may, on the other hand, need to have a more off-the-shelf access to ReaderT and Text-first environments, as well as having Vector and HashMap coming in handy. Now, the reader may think “Well, isn’t that w
(read more)
Psyche-C is a rather unique compiler frontend for the C programming language. The capabilities of Psyche-C that make it “special” are described in the project’s README. The primary goal of Psyche-C is to support the implementation of static analysis tools, but it may also be used as an ordinary parser through the cnippet driver adaptor. Resources and documentation about Psyche-C may be found in its open source repository, except for those related to its distinguished type inference engine, which are the reason for this website. Type Inference for C Coming soon…
(read more)
IGNORE THE COUNTER ABOVE: the timeframe is from March 6 to March 14 IN YOUR TIMEZONE; pick a 7-day block to work within these dates. 2021's 7DRL Challenge will be 7DRLx17. Create a complete roguelike game in 7 days! Read on for details. Roguelikes are traditionally procedurally generated RPGs in the mold of Rogue, with turn-based interactions in a grid-based environment, where levels are procedurally generated each play-through and death is permanent. In 2005, the roguelike community established a yearly event, the 7DRL Challenge, in which developers are challenged to create a roguel
(read more)
Go modules are a fundamentally misguided and harmful change in the design of the Go ecosystem. I decline to adopt them or to use software which requires use of them. Origin of the Cancer The origin of the misguided Modules proposal appears to be this series of blog posts by rsc. “We need to add package versioning to Go.” I reject this premise. “Versioning will let us enable reproducible builds,” False premise. Reproducible builds are already wholly feasible without “modules”. “, so that if I tell you to try the latest version of my program, I know you're going to get not just t
(read more)
There was a recent discussion among my social group about what “getting dramatically better as a programmer” means. Based on that discussion, I’ve decided to share my own approach to becoming a “dramatically better programmer”. I want others to understand what practices I’ve found useful, so they can incorporate them into their own life. My approach to getting dramatically better is built around a training regime. There are a specific set of “exercises” I do every week. I designed the training regime with two explicit goals in mind: Learning how to solve problems I didn’t know how to solve before. Learning how to write correct programs faster. My training routine consists of a total of four different exercises. Each one helps me achieve the two objectives abov
(read more)
Appendix A The Tanenbaum-Torvalds Debate What follows in this appendix are what are known in the community as the Tanenbaum/Linus "Linux is obsolete" debates. Andrew Tanenbaum is a well-respected researcher who has made a very good living thinking about operating systems and OS design. In early 1992, noticing the way that the Linux discussion had taken over the discussion in comp.os.minix, he decided it was time to comment on Linux. Although Andrew Tanenbaum has been derided for his heavy hand and misjudgements of the Linux kernel, such a reaction to Tanenbaum is unfair. When Linus himself heard that we were including this, he wanted to make sure that the world understood that he holds no animus towards Tanenbaum and in fact would not have sanctioned its inclusion if we had not been a
(read more)
In this writing I aim to complete a Fizzbuzz without if statements, conditionals, pattern matching or even using modulus calculations. And if that isn’t enough, I thought I’d use the opportunity to explore Haskell. The idea originated in the Friday lunchtime “Curry Club” at HMRC Digital, where a few like-minded software engineers are getting together to teach themselves Haskell. (For those not in on the joke, the language is named after the logician Haskell Curry.) At one of those sessions, talking about ifs and conditionals, the challenge was posited that a Fizzbuzz can be done without ifs. A Fizzbuzz test is a fairly common programming challenge, often used to evaluate a developer’s skill level. The basic instructions are as follows: Write a class that produces the following for any c
(read more)
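One way to meet the article's constraints (no ifs, no pattern matching, no modulus) is to exploit the periodicity of the words directly with cycling sequences. A Python sketch of that idea — my own, not the article's Haskell solution, and whether short-circuit `or` counts as a conditional is admittedly debatable:

```python
from itertools import cycle

def fizzbuzz(n):
    """FizzBuzz with no if statements or modulus: just cycle the word patterns."""
    fizzes = cycle(["", "", "Fizz"])           # every 3rd entry is "Fizz"
    buzzes = cycle(["", "", "", "", "Buzz"])   # every 5th entry is "Buzz"
    numbers = (str(i) for i in range(1, n + 1))
    # `or` selects the number when the combined word is empty; it is an
    # expression, not an if statement.
    return [word + buzz or num
            for word, buzz, num in zip(fizzes, buzzes, numbers)]
```

The same trick in Haskell is the classic `zipWith3` over `cycle ["", "", "Fizz"]` and `cycle ["", "", "", "", "Buzz"]`, which is presumably where a no-ifs solution in that language ends up.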
Recently I joined bevuta IT, where I am now working on a big project written in Clojure. I'm very fortunate to be working in a Lisp for my day job! As I've mostly worked with Scheme and have used other Lisps here and there, I would like to share my perspective on the language. Overall design From a first view, it is pretty clear that Clojure has been designed from scratch by (mostly) one person who is experienced with Lisps and as a language designer. It is quite clean and has a clear vision. Most of the standard library has a very consistent API. It's also nice that it's a Lisp-1, which obviously appeals to me as a Schemer. My favourite aspect of the language is that everything is designed with a functional-first mindset. This means I can program in the same functional style as I ten
(read more)
Functional Parsing - Computerphile - YouTube
(read more)