We are excited to release 0.5.0 of swift-parsing, our library for turning nebulous data into well-structured data, with a focus on composition, performance, and generality. This release brings a new level of ergonomics to the library by using Swift’s @resultBuilder machinery, allowing you to express complex parsers with a minimal amount of syntactic noise.

Parsing before today

Up to today, the parsing library leveraged a method-chaining, fluent style of parsing by using .take and .skip operators for running one parser after another and choosing whether you want to keep a parser’s output or discard it. For example, suppose we wanted to parse a string of data representing users:

let input = """
1,Blob,true
2,Blob Jr.,false
3,Blob Sr.,true
"""

And we wanted to parse that data into a more structured Swift data type, such as an array of user structs:

struct User {
  var id: Int
  var name: String
  var isAdmin: Bool
}

We could construct a User parser from some of the parsers the library comes with by piecing them together using .take and .skip. For example, we can consume an integer from the beginning of the string, then consume a comma and discard its output using .skip, then consume everything up until the next comma (for the name) using .take, then consume a comma again, and then finally consume a boolean. Once the integer, string, and boolean have been extracted from the string, we can .map on the parser to bundle it up into a User struct:

let user = Int.parser()
  .skip(",")
  .take(Prefix { $0 != "," })
  .skip(",")
  .take(Bool.parser())
  .map { User(id: $0, name: String($1), isAdmin: $2) }

And then finally we can use the Many parser combinator to run the user parser as many times as possible in order to accumulate the users into an array:

let users = Many(user, separator: "\n")

Running this parser on the input string produces an array of users and consumes the entire input, leaving only an empty string:

users.parse(&input) // [User(id: 1, name: "Blob", isAdmin: true), ...]
input // ""

Parsing with builders

The introduction of @resultBuilders to the library does not fundamentally change how you approach your parsing problems, but it doe
(read more)
If you've played around with electronic circuits, you probably know the 555 timer integrated circuit, said to be the world's best-selling integrated circuit with billions sold. Designed by analog IC wizard Hans Camenzind, the 555 has been called one of the greatest chips of all time. An 8-pin 555 timer with a Signetics logo. It doesn't have a 555 label, but instead is labeled "52B 01003" with a 7304 date code, indicating week 4 of 1973. Photo courtesy of Eric Schlaepfer. Eric Schlaepfer (@TubeTimeUS) recently came across the chip above, with a mysterious part number. He tediously sanded through the epoxy package to reveal the die (below) and determined that the chip is a 555 timer. Signetics released the 555 timer in mid-1972 and the chip below has a January 1973 date code (7304), so it must be one of the first 555 timers. Curiously, it is not labeled 555, so perhaps it is a prototype or internal version. I took detailed die photos, which I discuss in this blog post. The 555 ti
(read more)
It's been about four months since I last posted about Bagel, the new JavaScript-targeted programming language I've been working on. A lot has changed since then, but things are finally crystallizing and getting into a clear-enough space where I feel comfortable sharing some concrete details (and real code!). The past four months of this process have been a whirlwind. The compiler architecture has been turned on its head several different times as I figure out what it takes to build a static type-checker. Bagel's design, too, has gone through significant re-thinks. The core goals remain the same as they were in the original post, but some of the specific plans have gone out the window when it comes to individual features and semantics. So first I want to talk about what has and hasn't changed, and then I want to finally present some examples of real, functioning Bagel code.

What hasn't changed

Bagel is still a statically-typed language that compiles to JavaScript. It still has a hard separation between side-effect-free functional code and side-effect-y procedural code. It still has reactivity to the mutation of application state/data as a first-class citizen. It still aims to be approachable and familiar to people who already know JavaScript/TypeScript, and to maintain as much of the semantics of those as possible while refining them, expanding them, and sanding off the rough edges.

What has changed

Here are some things I talked about in the original post that have since changed: Functions will not be curry-able or partially-applicable by default. Originally I thought this was a no-brainer without any downsides, but a helpful commenter on Hacker News opened my eyes to the limitations it brings, so I decided to drop it. That said, just like in JavaScript it will be easy to write your own partially-applicable functions by hand. The pipeline operator will also be kept, even though it's less important now. I didn't explicitly state that Bagel would use MobX under the hood, but that was the plan at the beginning. Since then I've decided to switch over to a custom-written reactivity system, for a few different reasons, but the core semantics/concepts still line up with MobX-style thinking. Bagel will not have classes. Initially my thinking was that classes would be useful for UI components and/or state stores, but I came up with alternate approaches for both of those; see more below. Bagel will not have "components"; see more below.

Components/classes/stores

This part of the design went through quite a journey, including a few crises about whether the whole project was even going to work at all, but it ended up in a place I'm pretty happy with. Here's where I started out: classes can be useful as holders of mutable state. Mutable state should be kept small, but that small nugget is embraced by Bagel as a core part of its philosophy. Therefore, classes should have a place too. In particular: MobX embraces classes as both global stores, and for (React) UI components with observable state. It seemed like a no-brainer. But here's the rub: Bagel isn't aiming to be a DSL for React-style UIs, I want it to be a general-purpose language suitable for games, web servers, scripting, even compilers. That includes its reactivity system. So this means that Bagel entities - and classes in particular - can't benefit from any kind of lifecycle awareness. 
I tried figuring out a way to track the lifecycle of class instances statically and it started to look like I would have to write a complete borrow-checker with ownership rules and everything, and to put it lightly, that just wasn't something I was interested in even attempting to do. Okay, so no lifecycle hooks, so what? Well the thing is: when you set up a MobX-style reaction, it can't be cleaned up automatically. It has to either have a global lifetime, or be disposed of in some way (in practice, usually on componentWillUnmount()). Otherwise, you get memory-leaks galore. I want Bagel to be free of footguns; requiring the user to clean up their reactions manually, without a standard way of doing so, sounds like a footgun. And automatically cleaning up reactions with dynamic lifetimes just wasn't feasible. So: reactions became global-only. That's where we are today; reactions can only be set up at the module level, and they live forever. This sounds limiting, but the thing is, reactions are really just for bridging your reactive code to the outside world. Inside your own logic, you don't really need them. In fact, I plan on forbidding reactions from changing application state at all; they can only observe application state and cause side-effects in the outside world. And the thing about the outside world, is... it's global. Okay, so we have global-only reactions, but how is a global reaction going to observe state tucked inside a transient class-instance somewhere? It isn't, really. So then I moved to, instead of having classes, you only have global class-like singletons called stores. These looked like classes - they had members of the usual kinds, some of those could be private, etc - but they could only exist as a global singleton. This made them better suited to global reactions. But, eventually I realized that this was a bit silly. A store was really just another namespace inside of a module that had some different syntax. Things would be much simpler (for both language-learners and the language-implementer!) if I did away with the store concept and just allowed plain-data let declarations at the module level instead. Instead of private state/members you could just have non-exported declarations. For readability I added entire-module imports, so you can still do store.foo() if you want. Global-only state and a total lack of components may sound bad, but here I looked to Elm, which also has global-only state and no real concept of components. Redux also puts most UI state in a single global store. Both of these demonstrated that real, full-scale apps can be written with mainly or exclusively global state, and in Elm's case, with no concept of components at all; only render-functions. The big difference, though, between these and Bagel is that you can skip all the message/command/reducer business and just mutate your state directly when it comes time to do that. Instead of a "UI component" you have a module that exports a render function, and event handlers that mutate state, and maybe some data types and some functions that construct instances of those types. There's one other reason to have components, though: memoization/avoiding re-renders. But Bagel has this covered too; in fact, it dovetails wonderfully with its reactivity model. Any function in Bagel can be marked as memo. With this, Bagel will memoize all of its return-values. Calling the same function with the same arguments will return a cached result instead of re-computing it. 
And, importantly, Bagel will invalidate the function's cached result whenever any of the arguments or any mutable state it captures in its closure is mutated. This is basically how MobX's computedFn works, and it's essential if we want to memoize over mutable data, which we do. So, just memo your render function, and if the relevant state doesn't change between app renders, the previous render's output will be re-used. I've written a simple GUI app using the above paradigm, and so far it works really nicely. What's been solidified # These are things that were only ideas, possibilities, or open questions last time, and have since congealed into realities or at least semi-firm plans: Nominal types are a "definitely", and are partially implemented. To maintain JavaScript/TypeScript semantics, they won't be quite as ergonomic as they are in a language like Rust or Elm, but they should be pretty close, and a lot better than the way discriminated-unions work in TypeScript. Consts (and types prefixed with const) will be completely,
(read more)
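To make the memoization idea above concrete: the following is a minimal Python sketch of argument-keyed caching, not Bagel code, and it deliberately ignores the part Bagel adds on top (invalidating the cache when mutable state captured by the function changes, in the spirit of MobX's computedFn).

import functools

# Cache results keyed on the arguments; recompute only on a cache miss.
@functools.lru_cache(maxsize=None)
def render(count: int, label: str) -> str:
    print("recomputing...")  # visible only when the cache misses
    return f"<div>{label}: {count}</div>"

render(1, "clicks")  # computed, prints "recomputing..."
render(1, "clicks")  # same arguments: served from the cache
render(2, "clicks")  # different arguments: computed again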
January 25, 2022

The Playground

Introducing systemd-by-example.com, a playground for systemd that allows you to do experiments with systemd right from your browser!

Why?

In the first post in the systemd series, I explained my approach to learning a new topic, which is driven by experimentation. I also described a way to conduct these experiments in the case of systemd using podman containers, which avoids tampering with your real system. This setup has worked well for me, but I have to admit that it is a bit cumbersome. It involves juggling a few different terminal windows to prepare the container setup, build and start the container, and interact with it. And if you simply want to try out one of the examples mentioned in the blog post, you first have to get the example files from GitHub or re-create them yourself. This creates a pretty big barrier for people to actually try out the examples and do their own experiments, and I’m not aware of many people who did.

The playground

systemd-by-example.com tries to remove this barrier by making the examples accessible through the browser. (This is heavily inspired by Julia Evans’ tools for easy experimentation through the browser, the most recent being Mess with DNS.) On the playground, you can start any of the examples in the blog posts with a single click and interact with it through a command line. Hopefully this makes it easier to follow the examples and get a deeper understanding. Following examples is a good way to get started. The real learning experience, however, comes from trying out things on your own. That’s why all examples are editable: you can change the unit definitions, add or remove unit files, and even edit the description. Once you are happy with a setup, you can create a unique link to it (using the share button); you can then bookmark this link to come back to it later, or share it with others.

How?

The playground allows you to define a set of unit files and create a systemd setup based on these units. When you click on Start system, it sends a request to the backend which basically follows the process described in the first post: it creates a new container image with these unit files and
(read more)
24 Jan 2022The most fundamental property a database can provide is durability. That is, once I’ve told you that your write has been accepted, if a mouse chews through the power cord for the server rack, the write will not be lost. This obviously is only possible to a degree. If someone goes into your SSD with a magnetized needle and a steady hand and tweaks their bank balance, then (short of replication) the best you can probably do is detect that it’s been changed via a checksum, unless, of course, they had the foresight to update that as well. If we had to enumerate a very rough l
(read more)
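The excerpt cuts off before the article's list, but the contract it describes (an acknowledged write survives a crash) usually comes down to flushing data to stable storage before acknowledging. Below is a rough Python sketch of that single ingredient, with a made-up file name; real systems also worry about directory syncs, write ordering, checksums, and misbehaving disks.

import os

def durable_append(path: str, record: bytes) -> None:
    """Append a record and return only after asking the OS to persist it."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, record)
        os.fsync(fd)  # flush the file contents down to the storage device
    finally:
        os.close(fd)

durable_append("wal.log", b"balance=42\n")  # hypothetical write-ahead-log record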
November 4th, 2015 Passwords are such a pain in the ass. No matter how much you like to avoid them, you'll always find yourself in a situation where you have to pass one to a tool in a non-interactive manner (lest you make people write expect scripts that inject a password into anything that prints "assword:" to the terminal). Doing so always carries some risk of exposure, and how to best deal with these tools is, as usual, a question best answered with "it depends". But since this is such a common problem, here are some considerations you may find worthwhile. Passwords
(read more)
Commands and Arguments -> You are invited to make additions or modifications so long as you can keep them accurate. Please test any code samples you write. All the information here is presented without any warranty or guarantee of accuracy. Use it at your own risk. When in doubt, please consult the man pages or the GNU info pages as the authoritative references. A new version of this guide is currently being drafted. For now, this guide is still the most complete and best reviewed. Any contributions to the new guide are welcome via GitHub forks. This guide aims to aid people interested in learning to work with BASH. It aspires to teach good practice techniques for using BASH, and writing simple scripts. This guide is targeted at beginning users. It assumes no advanced knowledge -- just the ability to login to a Unix-like system and open a command-line (terminal) interface. It will help if you know how to use a text editor; we will not be covering editors, nor do we endorse any particular editor choice. Familiarity with the fundamental Unix tool set, or with other programming languages or programming concepts, is not required, but those who have such knowledge may understand some of the examples more quickly. If something is unclear to you, you are invited to report this (use BashGuideFeedback, or the #bash channel on irc.libera.chat) so that it may be clarified in this document for future readers. You are invited to contribute to the development of this document by extending it or correcting invalid or incomplete information. The primary maintainer(s) of this document: -- Lhunath (primary author) -- GreyCat The guide is also available in PDF format. Alternatively, you can just hit print after going to FullBashGuide. That guarantees you'll be printing the latest version of this document. BASH is an acronym for Bourne Again Shell. It is based on the Bourne shell and is mostly compatible with its features. Shells are command interpreters. They are applications that provide users with the ability to give commands to their operating system interactively, or to execute batches of commands quickly. In no way are they required for the execution of programs; they are merely a layer between system function calls and the user. Think of a shell as a way for you to speak to your system. Your system doesn't need it for most of its work, but it is an excel
(read more)
I've been a Linux (or GNU/Linux, for the purists) user since 1996. I've been a FreeBSD user since 2002. I have always successfully used both operating systems, each for specific purposes. I have found, on average, BSD systems to be more stable than their Linux equivalents. By stability, I don't mean uptime (too much uptime means too few kernel security updates, which is wrong). I mean that things work as they should, that they don't "break" from one update to the next, and that you don't have to revise everything because of a missing or modified basic command. I've always been for development and innovation as long as it doesn't (necessarily, automatically and unreasonably) break everything that is already in place. And the road that the various Linux distributions are taking seems to be that of modifying things that work just for the sake of it or to follow the diktats of the Kernel and those who manage it - but not only. Some time ago we started a complex, continuous and not always linear operation, that is to migrate, where possible, most of the servers (ours and of our customers) from Linux to FreeBSD.

Why FreeBSD?

There are many alternative operating systems to Linux and the *BSD family is varied and complete. FreeBSD, in my opinion, today is the "all rounder" system par excellence, i.e. well refined and suitable both for use on large servers and small embedded systems. The other BSDs have strengths that, in some fields, make them particularly suitable but FreeBSD, in my humble opinion, is suitable (almost) for every purpose. So back to the main topic of this article, why am I migrating many of the servers we manage to FreeBSD? The reasons are many, I will list some of them with corresponding explanations.

The system is consistent - kernel and userland are created and managed by the same team

One of the fundamental problems with Linux is that (we shall remember) it is a kernel, everything else is created by different people/companies. On more than one occasion Linus Torvalds as well as other leading Linux kernel developers have remarked that they care about the development of the kernel itself, not how users will use it. In the technical decisions, therefore, they don't take into account what is the real use of the systems but that the kernel will go its own path. This is a good thing, as the development of the Linux kernel is not "held back" by the struggle between distributions and software solutions, but at the same time it is also a disadvantage. In FreeBSD, the kernel and its userland (i.e. all the components of the base operating system) are developed by the same team and there is, therefore, a strong cohesion between the parties. In many Linux distributions it was necessary to "deprecate" ifconfig in favor of ip because new developments in the kernel were no longer supported by ifconfig, without breaking compatibility with other (previous) kernel versions or having functions (on the same network interface) managed by different tools. In FreeBSD, with each release of the operating system, there are both kernel and userland updates, so these changes are consistently incorporated and documented, making the tools compati
(read more)
Starting in 1991, every copy of MS-DOS (and many versions of Windows) included a hidden artillery game called Gorillas. It inspired a generation of programmers and drew the ire of computer lab instructors everywhere. Here’s how it came to be—and how to play it today.

The Simple Magic of Gorillas

It’s 1992, and you’re sitting in your school’s computer lab. In between assignments, you whisper to your friend, “Check this out.” In the C:\DOS directory, you run QBASIC.EXE, then load up GORILLA.BAS. Before long, you and a friend are two gorillas battling it out atop skyscrapers with exploding bananas. If you grew up with an IBM PC compatible during the early-mid 1990s, chances are high that you’ve either seen or played Gorillas, a free QBasic game first included with MS-DOS 5.0 in 1991. It was distributed with hundreds of millions, if not billions, of PCs in the 1990s. Gorillas builds off a long, proud lineage of artillery games on computers and game consoles. To play, you enter two variables: the angle of your banana and the power. You also have to take wind speed into account, which could blow your explosive banana off course. The Gorillas title screen. If you angle your launch just right and hit the other gorilla with your banana, it explodes, and your gorilla beats its chest in celebration. People who have played Scorched Earth or Worms will immediately be familiar with the basic mechanics of Gorillas. With charming graphics (including CGA and EGA support), amusing sound effects, and simple two-player gameplay, Gorillas crammed a lot of timeless gameplay into just 1,134 lines of code. Until now, no one has ever explored how this legendary game came about.

Tucking New Games into MS-DOS

MS-DOS, the command-line operating system, debuted as PC-DOS with the IBM PC in 1981. Up until the release of MS-DOS 5.0, Microsoft had never marketed its DOS operating system as a standalone showcase retail product. “Basically, the MS-DOS team previously had only shipped to OEMs and never retail,” recalls Brad Silverberg, then the Microsoft VP in charge of MS-DOS 5.0. Microsoft needed to spice things up because selling retail copies of MS-DOS individually wasn’t as much of a sure bet as selling to OEMs. “We had to build a compelling product and a compelling selling proposition,” Silverberg says. “It was a total change in the way both the product team and marketing team had to think. It had to be something people wanted to buy, rather than some software they didn’t have much choice about that was included with their new computer.” With this in mind, Microsoft began adding notable features to MS-DOS 5.0 before launch, including an undelete utility, a graphic shell (DOS Shell), a full-screen text editor (MS-DOS Editor), and a new BASIC interpreter called QBasic. QBasic’s syntax differed dramatically compared to its predecessor, GW-BASIC, so Microsoft decided to include four example programs to help new programmers get started with the language. These programs came with file names such as MONEY.BAS (a personal finance manager), REMLINE.BAS (removes lin
(read more)
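The angle/power/wind mechanic described above boils down to simple projectile integration. This is not the BASIC from GORILLA.BAS, just a small Python sketch with made-up constants to show how wind pushes the banana off course:

import math

def banana_path(angle_deg, power, wind, gravity=9.8, dt=0.05, max_steps=1000):
    """Integrate a projectile; wind acts as a constant horizontal acceleration."""
    vx = power * math.cos(math.radians(angle_deg))
    vy = power * math.sin(math.radians(angle_deg))
    x, y, points = 0.0, 0.0, []
    for _ in range(max_steps):
        vx += wind * dt
        vy -= gravity * dt
        x += vx * dt
        y += vy * dt
        points.append((x, y))
        if y < 0:  # the banana came back down to launch height
            break
    return points

path = banana_path(angle_deg=60, power=50, wind=-2.0)
print(f"lands about {path[-1][0]:.1f} units away after {len(path)} steps")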
@schuhschuh opened an issue at https://github.com/gflags/gflags/issues/76 in 2015 to request a flag-alias feature, and it has been open for about 7 years. I happened to achieve a nice implementation in cocoyaxi (co for short) today. It is easy to define a flag with an alias in co:

DEF_bool(debug, false, "", d); // d is an alias of debug

Now we can use either --debug or -d in the command line or in a config file. The magic is that we can add any number of aliases to a flag:

DEF_bool(debug, false, "");         // no alias
DEF_bool(debug, false, "", d, dbg); // 2 aliases

Details can be seen in this commit on GitHub.
(read more)
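The same flag-alias idea exists in other option parsers. For comparison (this is Python's standard argparse, not cocoyaxi or gflags), every option string passed to add_argument becomes an alias for the same destination:

import argparse

parser = argparse.ArgumentParser()
# -d, --dbg and --debug all set the same "debug" flag.
parser.add_argument("-d", "--dbg", "--debug", dest="debug",
                    action="store_true", help="enable debug output")

print(parser.parse_args(["--debug"]).debug)  # True
print(parser.parse_args(["-d"]).debug)       # True
print(parser.parse_args([]).debug)           # False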
The Linux Vendor Firmware Service (LVFS) with Fwupd for firmware updating on Linux could soon be making it easier to transition older, end-of-life devices off official firmware packages and onto the
(read more)
Writing shell scripts used to be a major, major pain for me. I remember many frustrating sessions where I tried to find a misplaced quote or a missing backtick. I cursed shell script and only used it as a last resort. In those days, I would never, ever have thought that I would write 100K lines of shell script code for a project and not even mind very much doing so. The main reason for this change of mind is ShellCheck. Combined with a colorizing syntax highlighter in an editor like Sublime Text, ShellCheck makes the previously tedious search for that elusive missing backtick or doublequote super easy, barely an inco
(read more)
I like the number guessing game, conceptually. Using this as a first example is especially rewarding in Python since it imposes so few syntax restrictions on its users. When an absolute beginner looks
(read more)
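For reference, here is a minimal Python version of the kind of number guessing game the post is talking about (my own sketch, not the author's code):

import random

secret = random.randint(1, 100)
while True:
    guess = int(input("Guess a number between 1 and 100: "))
    if guess < secret:
        print("Too low!")
    elif guess > secret:
        print("Too high!")
    else:
        print("You got it!")
        break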
January 23rd, 2022

1 A Brief Aside: Language Design is Holistic
2 Background Concepts
2.1 Background Concept 1: What Is Uninitialized Memory, And Why It’s Useful
2.1.1 Uninitialized Memory Is Good Actually
2.1.2 Expanding The Scope of Uninitialized Memory
2.1.3 Freeing Pointers: Spooky Deinitialization At A Distance
2.2 Background Concept 2: Ownership
2.2.1 Ownershiplang: Pseudocode and Terminology For Ownership
2.2.2 Ownership: What’s The Point?
2.3 Background Concept 3: Definite Initialization Analysis
3 Problems and Problematic Features
3.1 Problem: Constructor Transactionality
3.1.1 Java Constructors
3.1.2 Rust Constructors
3.1.3 C++ Constructors
3.1.4 Swift Constructors
3.2 Problem: Reassignment
3.2.1 Reassignment Desugar 1: drop-construct
3.2.2 Reassignment Desugar 2: construct-drop-move
3.2.3 Reassignment Desugar 3: construct-clone_from
3.3 Problem: Delayed Initialization
3.4 Problem: Moving Out
4 Implementing Ownership
4.1 Ownership Strategy: Null Sentinels (And Reference Counting with Moves)
4.2 Ownership Strategy: Empty Sentinels (C++ RAII and Move Semantics)
4.3 Ownership Strategy: Smuggled Drop Flags (Early Version of Rust)
4.4 Ownership Strategy: Stack Drop Flags (Rust)
4.5 Ownership Strategy: Static Drop (Rejected Rust Proposal)
5 IM FUCKING DONE

Disclaimer: this was entirely written in a fever dream over two days, I have no idea what this is. So you’re making a programming language. It’s a revolutionary concept: the first language optimized for the big bappy paws of cats. It’s going great – you’ve got yourself a sophisticated LR(BAPPY) parser; some basic types like Int, and FoodBowl; and some basic operations like if, and purr. But now you’ve reached that dreaded point: you have to implement some kind of heap allocation. Well that part’s fine. The real nasty part is deallocation. It’s time. It’s time to destroy some values. And so you build a world-class concurrent tracing generational garbage collector and call it a day. Problem Solved. But is it really? When we talk and think about “destructors” or “deinitialization” the primary focus is always on memory management, and for a good reason: i
(read more)
Last week I had a thought: “What’s the simplest Lisp interpreter I could write, which supports macros?” A weekend of Clojure hacking and some confusion later, a REPL was born. In this essay, we’ll go through the same journey and make our own Lisp interpreter, which supports macros…in Clojure! Let’s call it, LLisp: a lisp in a lisp. By the end, you’ll write your own macros in your own darn programming language! Let’s get into it. Okay, let’s sketch out the basic idea behind what we need to do:

1. A programmer writes some code. So far, it’s just text, and there’s not much we can do with that.
2. We read the programmer’s text, and convert it into data-structures.
3. We can do something with data-structures: we can evaluate data-structures, and return whatever the programmer intended.

If we can convert "(+ 1 1)" into 2, we have ourselves the roots of a programming language. Let’s handle “2. Read”. If a programmer writes text like "(+ 1 1)", we want to convert it to data-structures. Something like:

(read-string "(+ 1 1)") ; => (+ 1 1)

We could write read-string ourselves. For the simplest case it’s pretty easy [1]. But, we’re in Clojure after all, and Clojure already understands how to read Lisp code. Let’s just cheat and use Clojure’s edn: (ns simple-lisp.core
(read more)
As I mentioned last week, your TCB is important and without a good one, your capabilities are quite limited when it comes to attestation. So, all right, we now live in a world where we know bugs like TPM Carte Blanche exist, and we can never go back to the world where it doesn’t. (Actually, we’ve been living in that world since 2014.) So what do we do now? The best thing to do is probably to find a hardware Root of Trust for Measurement. Your CPU vendor may have some ideas, but a non-exhaustive list probably doesn’t leave out Boot Guard or Platform Secure Boot. What about the millions of devices out there in the world today that don’t have a hardware RTM? Well, it turns out TPM 2.0 has some uncommonly-used features that can come in handy here. TPM 2.0 has a feature called audit sessions, which you can read about in Part 1 of the TPM 2.0 specification. Audit sessions serve two purposes:

1. An audit session with a session key (set up by “salting” or “binding” the session) causes the TPM to use that key to calculate an HMAC over its response.
2. An audit session digest can be explicitly attested by an Attestation Key for consumption by remote verifiers, using a command called TPM2_GetSessionAuditDigest.

When you set up an audit session, it gets initialized with an “empty digest” (a buffer of all 0x00 bytes the size of the session hash algorithm, like a PCR). Each time you send a command in the audit session, the session is extended:

\[ commandParameterHash = hash(commandCode \,\|\, names \,\|\, commandParams) \]

where \(commandCode\) is the command code constant associated with the command, \(names\) is the concatenation of all authorized TPM Names (TPM object unique identifiers; hash of public area or primary seed handle value) used in the command, and \(commandParams\) is the concatenation of all the command parameters.

\[ responseParamHash = hash(responseCode \,\|\, commandCode \,\|\, responseParams) \]

where \(responseCode\) is a TPM_RC value and \(respons
(read more)
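The extend-style accumulation described above can be sketched in a few lines of Python. This is an illustration only, not real TPM wire handling: it assumes the audit digest is updated as hash(oldDigest || cpHash || rpHash), which matches my reading of TPM 2.0 Part 1, and the byte strings below are placeholders rather than actual TPM encodings.

import hashlib

HASH = hashlib.sha256  # assume a SHA-256 session hash algorithm

def cp_hash(command_code: bytes, names: bytes, command_params: bytes) -> bytes:
    # commandParameterHash = hash(commandCode || names || commandParams)
    return HASH(command_code + names + command_params).digest()

def rp_hash(response_code: bytes, command_code: bytes, response_params: bytes) -> bytes:
    # responseParamHash = hash(responseCode || commandCode || responseParams)
    return HASH(response_code + command_code + response_params).digest()

def extend_audit(audit_digest: bytes, cp: bytes, rp: bytes) -> bytes:
    # new auditDigest = hash(old auditDigest || cpHash || rpHash)
    return HASH(audit_digest + cp + rp).digest()

audit = bytes(HASH().digest_size)  # session starts as an "empty digest" of zeroes
cp = cp_hash(b"<commandCode>", b"<names>", b"<commandParams>")
rp = rp_hash(b"<responseCode>", b"<commandCode>", b"<responseParams>")
audit = extend_audit(audit, cp, rp)
print(audit.hex())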
I like to tinker with the TPM in my spare time. It’s like a great big box of security legos, or like a dryer, punk-er form of Minecraft. It’s pretty fun. In 2017 I had the privilege of working on mitigations for an issue called ROCA. From then on, I’ve been fascinated by the idea of the Trusted Computing Base. I believe that Murphy’s Law applies equally to code as it does to which way buttered toast will fall, or whether two intersecting clues in the New York Times crossword will be unusual names of minor celebrities from the 1970’s. The meaning of the TCB isn’t so much “your system is safe because of this smart stuff in this box” as it is “your system is utterly booched when we find any important mistakes in this box”. Also, the number of mistakes we know about in any given box is a monotonically increasing function of time. Nobody says “good news, we’ve discovered some unexpectedly correct behavior in your kernel.” I recently came into the possession of a Surface Pro 3, which is a machine that I happen to know shipped with TPMs affected by ROCA. I thought “ah, this is my chance to apply my superficial understanding of finite-field arithmetic to learn some more about this bug and how it was discovered.” So, I installed Linux on it, sshed in, and installed some TPM tools. My go-to “hello world” TPM tool is gotpm, and reading the PCRs is a pretty basic TPM activity that lets you know you’re talking to a TPM. So, when I was greeted with these PCRs, I knew that obviously I had made a mistake, and I was talking to some simulated, shim TPM or something:

0: 0000000000000000000000000000000000000000000000000000000000000000
1: 0000000000000000000000000000000000000000000000000000000000000000
2: 0000000000000000000000000000000000000000000000000000000000000000
3: 0000000000000000000000000000000000000000000000000000000000000000
4: 0000000000000000000000000000000000000000000000000000000000000000
5: 0000000000000000000000000000000000000000000000000000000000000000
6: 0000000000000000000000000000000000000000000000000000000000000000
7: 0000000000000000000000000000000000000000000000000000000000000000
8: 0
(read more)
A while back I had the opportunity to work with Common Lisp professionally. As has happened to many before and after me, a lot of the powerful features of Common Lisp and its implementations made me a little drunk on power for a while, but I quickly recovered. Some things stuck with me, however. Among them was setf, which feels similar yet different to another concept that I quite adore: Lenses. As with lenses and many other concepts before them, I decided to try to understand setf more deeply by implementing it in Carp. The final pull request to the Carp standard library got rejected—by myself, no less!—but the library lives on! In this blog post, we’re going to look at how to implement setf together. It’s going to be an interesting riff on what we did when we implemented
(read more)
This living document comprises my recommendations for how to organize and manage Rust CLI applications. In this document, I cover some tips and best practices for writing Rust applications, informed by my experience writing real-world Rust tools. I've focused on command-line tools here, but many of the suggestions can be generalized to graphical and server applications as well. I hope you find them useful for your own applications. If you haven't gone through the Rust CLI Book yet, I
(read more)
Last week someone posted a /r/cpp thread titled “Declaring all variables local to a function as const”: Ok, so I’m an old-school C++ programmer using the language now since the early ’90s. I’m a fan of const-correctness for function and member declarations, parameters, and the like. Where, I believe, it actually matters. Now we have a team member who has jumped on the “everything is const” bandwagon. Every code review includes dozens of lines of local function variables now declared const that litter the review. Intellectually, I understand the (what I consider mostly insignificant) arguments in favor of this practice, but in nearly 30 years I have never had a bug introduced into my code because a local function variable was mutable when I didn’t expect it. It does nothing for me to aid in code analysis or tracking. It has at most a tiny impact on performance. Maybe I’m just an old dog finally unable to learn a new trick. So, on this fine Wednesday, who’s up for a religious war? What are y’all doing? TLDR: I’m not putting const on all the things, either. In function signatures: the good First, let’s clarify that it’s important to pu
(read more)
Introduction to memory-mapping

Note: This section is introductory material for those who are not yet familiar with the concept of memory-mapping. If you are already experienced with memory-mapping feel free to jump to the next section. Most likely you won’t miss anything new.

One of the most common ways of accessing peripherals from a CPU is memory-mapping. In short, this means that the address space of the CPU has some addresses that, when accessed, read/write a peripheral’s registers. In order to access such peripherals from our code there are multiple strategies that could be used. This post will explore multiple alternatives and discuss their differences and fitness for their unique task. As an example of memory-mapping we will have a look at an STM32F030 microcontroller. This is one of the simplest 32-bit ARM Cortex-M MCUs from ST Microelectronics. The architectural information we need is usually described in a Reference Manual document. This MCU contains an ARM Cortex-M0 core that interfaces via a Bus Matrix with multiple peripherals. The bus matrix provides access to multiple components of the MCU. Amongst them, we have the following:

Internal RAM memory.
Internal Flash memory.
A connection to an AHB1 bus, which bridges to an APB bus. AHB is a bus designed by ARM as part of the AMBA standard. It is a de-facto standard for MCU buses in the ARM Cortex-M world and normally interfaces to high-speed peripherals. APB is another bus, also part of the AMBA standard. It is a lower-speed bus dedicated to peripheral accesses, which normally do not require large throughput.
A second AHB2 bus dedicated to GPIO ports.

Notice how GPIO ports have a dedicated AHB2 bus. This makes s
(read more)
In a Clang/GCC -g1 or -g2 build, the debug information is often much larger than text sections. Some assemblers and linkers offer an optional feature which compresses debug sections. History In 2007-11, Craig Silverstein added --compress-debug-sections=zlib to gold. When the option was specified, gold compressed the content of a .debug* section with zlib and changed the section name to .debug*.zlib.$uncompressed_size. In 2008-04, Craig Silverstein changed the format and contributed Patch to handle compressed sections to gdb. The compressed section was renamed to .zdebug*. In 2010-06, Cary Coutant added --compress-debug-sections to gas and added reading support to objdump and readelf. ELF Section Compression has a nice summary of the .zdebug format. The article lists some problems with the format which led to a new format standardized by the generic ELF ABI in 2012. I recommend that folks interested in the ELF format read this article. My thinking of implementing ELF features has been influenced by profound discussions like this article and other discussions on the generic ABI mailing list. In Solaris 11.2, its linker introduced -z compress-sections to compress candidate sections. The generic ABI format led to modification to the existing assembler and linker options in binutils. In 2015-04, H.J. Lu added --compress-debug-sections=[none|zlib|zlib-gnu|zlib-gabi] to gas and added --compress-debug-sections=[none|zlib|zlib-gnu|zlib-gabi] to GNU ld. In 2015-07, H.J.
(read more)
FreeM is an implementation of the M programming language, begun by the efforts of the mysterious Shalom ha-Ashkenaz. In response to InterSystems' spree of buying up all competing M implementations, Shalom gifted FreeM to MUG Deutschland in 1998, in hopes that the M community would turn it into a viable, freely available, and fully-featured M implementation. After years of dormancy, the FreeM project has been resurrected, and under the stewardship of Coherent Logic Development and a small core team of contributors, work is proceeding towards completing the original FreeM team's goals
(read more)
I spent a few days playing around with bootloaders for the first time. This post builds up to a text editor with a few keyboard shortcuts. There are definitely bugs. But it's hard to find intermediate resources for bootloader programming so maybe parts of this will be useful. If you already know the basics and the intermediates and just want a fantastic intermediate+ tutorial, maybe try this. It is very good. The code in this post is available on Github, but it's more of a mess than my usual project.

Motivation: Snake

You remember the snake bootloader in a tweet from a few years a
(read more)
There are a bunch of operations you may want to perform before the rendered response in conn is sent to the client, such as minification. In this post I'll show you how to do it easily. Plug.Conn allows you to register a "before send" hook via the register_before_send/2 function:

require Logger

Plug.Conn.register_before_send(conn, fn conn ->
  Logger.info("Sent a #{conn.status} response")
  conn
end)

It's very handy especially if you want to process conn with a plug. Here is an example from a simple minify_response library:

defmodule MinifyResponse.HTML do
  alias Minify
(read more)
There's been a lot of buzz lately about GitHub Copilot, mostly as it pertains to code quality and copyright law. For the most part, I appreciate Copilot's suggestions, but just like with Stack Overflow before it, I know better than to blindly accept code without reviewing it. That said, it's a nuance that I have no intention of getting into any time soon. One thing that I did find interesting, though, was that Copilot doesn't just limit itself to code. It also likes to insert itself into my writing. So, as an experiment, I decided to let Copilot write a post for me. The insanity that follows is what it came up with. With the exception of a starting quote character (>) to set the tone, Copilot wrote every single line below.

"The only way to avoid working is to work."
-- E. W. Dijkstra

The Softwa
(read more)
Back in December, I saw a tweet that piqued my interest. https://twitter.com/hundredrabbits/status/1466441362354573314 I don't have an esolang, but I did have the seeds of a Java DSL for building classes at runtime. That DSL builds on top of ASM to let you write any code you want in what looks like a high level language. I'd been letting the idea languish for a bit, but after seeing the tweet, I got back to work. Unfortunately, December is a busy time, so you're getting this post in the second half of January. My idea to write a compiler/DSL1 started with needle, my project to compile regexes to Java bytecode. In that project, each regular expression is compiled to a class implementing its particular matching logic. I'd been using ASM in that project, alongside some very small
(read more)
Blogging about Python and the Internet

Strict Python function parameters

Published 2022-01-23

What do you think about when writing a new function in Python? The function name, parameter names, optional/required parameters, and default arguments are all on the list. Here is a simple Python function that has all these covered:

def process_data(data, encoding="ascii"):
    # Fancy data processing here!

However, there's one aspect many programmers have an opinion about but don't realize can be encoded into the function definition: How should callers specify each argument to the function? For the above funct
(read more)
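The excerpt stops before the answer, but Python already has syntax for pinning down how callers pass each argument: parameters before a bare / are positional-only and parameters after a bare * are keyword-only. A quick sketch applying those markers to the post's process_data example (the body here is my own placeholder):

def process_data(data, /, *, encoding="ascii"):
    # data must be passed positionally; encoding must be passed by keyword.
    return data.decode(encoding)

process_data(b"abc")                    # OK
process_data(b"abc", encoding="utf-8")  # OK
# process_data(data=b"abc")             # TypeError: positional-only parameter
# process_data(b"abc", "utf-8")         # TypeError: too many positional arguments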
In this section we summarize some guiding principles for designing and organizing scientific Python code. Collaborate¶ Software developed by several people is preferable to software developed by one. By adopting the conventions and tooling used by many other scientific software projects, you are well on your way to making it easy for others to contribute. Familiarity works in both directions: it will be easier for others to understand and contribute to your project, and it will be easier for you to use other popular open-source scientific software projects and modify them to your purposes. Talking through a design and the assumptions in it helps to clarify your thinking. Collaboration takes trust. It is OK to be “wrong”; it is part of the process of mak
(read more)
One of my favorite stories by Isaac Asimov is Profession. The following is a spoiler, so please read the story before proceeding if you don't like spoilers. In the futuristic society of year 6000-something, people no longer need to learn their profession from books, lectures or hands-on experience. Each person has their brain analyzed at a certain age and then the know-how for the occupation that's best suited for them is simply uploaded into the brain using special cassettes (hey, this story is 60 years old) and electrodes. The folks who end up the best at their craft (determined via competitions) end up with high-demand assignments on "Class A" outer worlds. The protagonist, George Platen, has a dream of getti
(read more)
Making devices more repairable is pretty much universally seen as a good thing, right? Unfortunately, engineering involves tradeoffs, and some of the tradeoffs that are seen as bad for repair are actually desirable in spite of that, or actually improve reliability. These are some things I suspect right-to-repair advocates forget. This article is intended to unify some disparate thoughts on the subject I’ve had in Lobsters comments, this blog (i.e. the ThinkPad one), etc. as one post. I intend to do this more often for other things…

Computers last longer than they used to

This is less related to reliability, and more about how upgrading systems just isn’t like it used to be. Computers were obsolete out of the box and became less useful over the yea
(read more)
Though Facebook is really good at a few things -- being a rage amplifier; providing a clean, well-lit space for fascists; and allowing unmedicated schizophrenics to find each other and thereby elevate their delusions into national movements -- it's important to remember that they are actually stultifyingly incompetent at just about everything that comprises what most people think their business is. Sadly, my businesses still have a presence on Facebook and Instagram because choosing not to use those services essentially means choosing not to advertise, and that's not really a stand we can afford to take during this pandemic apocalypse. And since I still have to manage this shitshow, here's me pissing in the wind again about how terrible it is to try and actually use it. I've written bef
(read more)
Reckless Drivin' is a shareware Macintosh game released by Jonas Echterhoff in 2000. Jonas released the source code on GitHub in 2019, but the game is difficult to compile due to the dependency on deprecated Apple system calls and the CodeWarrior project structure. I have been working on Open Reckless Drivin' off and on over the last couple years to modernize the code and release the game for all platforms, and my previous post about Open Reckless Drivin' explains more about my goals with this project. This post shares interesting things I learned while unpacking the game assets from the binary file Jonas released. While I used an Apple PowerBook G4 to play Reckless Drivin' when I was younger, I never used a PowerPC Mac at an age when I could understand things like QuickDraw, Resour
(read more)
I wouldn’t be a data scientist if I hadn’t been a stalled novelist. It’s also possible that I wouldn’t be a published author if I hadn’t become a data scientist. Eight years ago, I was a frustrated writer, exhausted by my unsuccessful efforts to get published. I needed something new, and it felt natural to return to my beginnings: science. As an undergraduate, I’d majored in biology, drawn to its pursuit of knowledge and capacity to solve problems. Interesting things were happening in the field of biomedical engineering, so as I pondered a new career, I set my sights there. I took mathematics courses at a local university. Instead of writing in the evenings after work, I studied math. On a whim, I enrolled in a computer science course, and I was hooked
(read more)
Let us solve a Wordle with grep. We will rely on the words file that comes with Unix or Unix-like systems and the grep command. The Wordle game published on 22 Jan 2022 is solved in this post. The output examples shown below are obtained using the words file /usr/share/dict/words, GNU grep 3.6, and GNU bash 5.1.4 on Debian GNU/Linux 11.2 (bullseye). In this post, we will solve the Wordle in a quick and dirty manner. We will not try to find the most optimal strategy. The focus is going to be on making constant progress and reaching the solution quickly with simple shell commands. The steps below show how to solve a Wordle in this manner. Make a shell alias named words that selects all 5 letter words from the words file. $ alias words='gre
(read more)
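The post does this with grep; below is a rough Python analogue of the same filtering idea, with completely made-up clue constraints just to show the shape of each step (pick 5-letter words, drop words containing grey letters, require yellow letters away from their guessed spot, pin green letters):

import re

with open("/usr/share/dict/words") as f:
    words = {w.strip().lower() for w in f}
words = {w for w in words if re.fullmatch(r"[a-z]{5}", w)}

def candidate(w):
    if any(c in w for c in "toune"):  # grey: letters known to be absent (made up)
        return False
    if "r" not in w or w[0] == "r":   # yellow: 'r' is present, but not first
        return False
    return w[2] == "a"                # green: 'a' pinned in the third position

print(sorted(w for w in words if candidate(w))[:20])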
There’s a niche genre of music on the internet called “oscilloscope music”. This is electronic music that is designed to be visualized with an oscilloscope. Music visualizers have existed for a long time, but they often just display an image that represents the audio abstractly. Oscilloscope music allows the musician to draw arbitrary shapes using sound. Compare this to the original video. Here’s the finished project on Github.

How it Works

Oscilloscope music depends on a few simple properties of stereo audio and oscilloscopes. Stereo audio consists of two audio channels, left and right. Each channel is a sequence of audio samples that range from -1 to 1. An oscilloscope set to XY mode will use these two channels of data to move a “pen” of light aro
(read more)
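Those two properties are enough to draw with sound. The sketch below is not from the article; it just writes a stereo WAV file where the left channel is a cosine and the right channel is a sine, so an oscilloscope (or a software XY scope) fed this file traces a circle:

import math
import struct
import wave

RATE, SECONDS, FREQ = 48000, 5, 200.0  # 200 Hz means 200 circles per second

with wave.open("circle.wav", "wb") as out:
    out.setnchannels(2)   # stereo: left drives X, right drives Y
    out.setsampwidth(2)   # 16-bit samples
    out.setframerate(RATE)
    frames = bytearray()
    for n in range(RATE * SECONDS):
        t = 2 * math.pi * FREQ * n / RATE
        left = int(0.8 * 32767 * math.cos(t))   # X coordinate
        right = int(0.8 * 32767 * math.sin(t))  # Y coordinate
        frames += struct.pack("<hh", left, right)
    out.writeframes(bytes(frames))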
Linux on a 486SX

A year or two ago, I came into possession of a 1993 Compaq Presario 425.  It's a Mac-ish all-in-one with a 14" color (S?)VGA screen capable of 800x600 in full 256-color glory.  On the inside it features a 25MHz i486SX (the sort without a floating-point unit), paired with 20MB of RAM - a significant upgrade over the stock 4MB.  When I received it it had a nearly-dead 1.6GB hard drive[1] in it, which I replaced with a far overkill 150GB drive[2].  It also has a single 3.5" floppy drive.  On the back it has two PS/2 ports for keyboard and mouse input, serial and parallel ports, and two ports for the built-in modem.  There is also what appears to be a sound card, along with a single 36-pin connector I don't recognize.  The hard drive it came with had (as far as
(read more)
Hi all! This is the first installment of a series of articles I intend to write in early 2022. With Slackware 15.0 around the corner, I think it is a good time to show people that Slackware is as strong as ever a server platform. The core of this series is not about setting up a mail, print or web server – those are pretty well-documented already. I’m going to show what is possible with Slackware as your personal cloud platform. A lot of the work that went into developing Slackware between the 14.2 and 15.0 releases was focusing on desktop usage. The distro is equipped with the latest and greatest KDE and XFCE desktops, a low-latency preemptive kernel, Pipewire multimedia framework supporting capture and playback of audio and video with minimal latency, et cetera. I have been enjoying Slackware-current as a desktop / laptop powerhouse for many years and built a Digital Audio Workstation and a persistent encrypted Live Distro with SecureBoot support out of it. Slackware Cloud Server Series – Summary The imminent release of Slackware 15.0 gives me fresh energy to look into ‘uncharted’ territory. And therefore I am going to write about setting up collaborative web-based services. Whether you will do this for the members of your family, your friends, your company or any group of people that you interact with a lot, this year you will hopefully learn how to become less dependent on the big ‘cloud’ players like Google, Microsoft, Zoom, Dropbox. What I will show you, is how to setup your own collaboration platform on a server or servers that you own and control. Think of file-sharing, video-conferencing, collaborative document editing, syncing files from your d
(read more)
The National Association of Independent Schools developed an Elixir implementation of Paul Bourke's CONREC algorithm to calculate the drive-time to a particular location. The Conrex hex package is now available to developers here on GitHub. Most map-based apps calculate the distance from a central point outward. That’s helpful if you want to see how long it will take you to fly somewhere else, but not so helpful if you want to calculate how long it will take customers to drive through traffic to get to your location. Conrex uses a convergent isochrone to calculate real traffic and topographic conditions. An implementation of Paul Bourke's CONREC algorithm in Elixir, Conrex is now available to the open source community. NAIS developed Conrex for its Market View app. Market View helps schools find children who are within a reasonable driving distance of the school. It can also be used to map bus routes, commute times, or to determine a new location for a business.

Installation

Conrex can be installed by adding conrex to your list of dependencies in mix.exs:

def deps do
  [
    {:conrex, "~> 1.0.0"}
  ]
end

Usage

The main algorithm outlined by Bourke can be invoked with Conrex.conrec:

iex> Conrex.conrec(values, x_coords, y_coords, contour_levels)

where values is a 2D list of samples (heights, travel times, etc), x_coords and y_coords are lists of X and Y coordinates for the sample grid, and contour_levels is a list of values at which a contour should be calculated. Conrex.conrec outputs a list of line segments to match the classic algorithm. If the X and Y values are GPS coordinates, you can use Conrex.contour_polygons to generate GeoJSON polygons for each c
(read more)
Are you trying to establish a good end-to-end testing infrastructure at your company? This is how Facebook does it.

The problem

End-to-end (E2E) tests verify that the product works at a high level. For example, if you have an e-commerce website, an E2E test could simulate an important user behavior: open the website in a browser, search for a product, add it to the cart and perform checkout. It’s hard to make end-to-end tests reliable, because (by definition) these tests rely on all components of your system. If each component has a reliability of 99%, a test that depends on ten systems has roughly a 10% chance of failing (1 − 0.99^10 ≈ 0.096). This matters even more when you run hundreds of tests a day. E2E tests often get a bad reputation among developers due to this flakiness. See Google Testing Blog: Just Say No to More End-to-End Tests. When the engineering organization is growing, we have to scale our tooling. We can’t expect thousands of engineers to be experts in end-to-end testing. So whenever somebody is writing a new test or investigating a failure, it should not require prior experience dealing with E2E tests.

The solution

Over the years engineers at Facebook figured out ways to improve the E2E testing practice. You can watch the tech talk linked at the end for more details. Here are some improvements that I think were really important.

1. Make the testing API declarative

This is a very basic trick, but many engineers writing tests don’t realize that testing code is also, eh, code, and it will benefit from a healthy amount of abstraction. Most examples on the internet about how to write E2E tests are very basic and look something like this:

findElement('#email').enterText('[email protected]');
findElement('#password').enterText('secret');
findElement('#submit').click();

This is very low-level code. If you stick to this API you’ll end up with a lot of copy-pasted tests that rely on implementation details. The moment somebody changes your login sequence details, all tests will fall apart. Consider:

loginWithAccount('[email protected]', 'secret');

Now the API has become much better. It communicates the intent well and abstracts away the implementation details. When an engineer
(read more)
In different programming languages, the behavior of virtual functions differs when it comes to constructors and destructors. Incorrect use of virtual functions is a classic mistake, and developers make it often. In this article, we discuss this classic error.

Theory

I suppose the reader is familiar with virtual functions in C++. Let’s get straight to the point. When we call a virtual function in a constructor, the function is overridden only within a base class or the currently created class. Constructors in the derived classes have not yet been called. Therefore, the virtual functions implemented in them will not be called. Let me illustrate this.

Explanations:
Class B is derived from class A;
Class C is derived from class B;
The foo and bar functions are virtual;
The foo function has no implementation in the B class.

Let’s create an object of the C class and call these two functions in the class B constructor. What would happen?
The foo function: the C class has not yet been created, and the B class has no implementation of foo. Therefore, the implementation from the A class is called.
The bar function: the C class has not been created yet. Thus, a function related to the current B class
(read more)
One of the core data types in PHP is the array, mostly unchanged since the early beginnings of the language. The name "array" is a bit unfortunate, as is the implementation. It is not really an array. In fact it is some sort of Frankenstein combination of a list and a dictionary, known from other languages. This is quite confusing and can cause unexpected and sometimes nasty effects. Sometimes, it breaks stuff. That happened to me last week. More on that later. First, to get things clear, let's talk about the difference between lists and dictionaries.

Lists

A list, sometimes also known as an array, is, like the name suggests, a list of elements of any type. These elements are ordered, and every element has a numeric index, starting with 0. Example in Javascript:

let myList = [2, 1, 'foo', new Date()];
let myElem = myList[2]; // myElem contains 'foo'

Dictionaries

In Python it's called a dictionary, Perl and Ruby call it a hash, in Javascript / JSON it's known as an object. Which is also rather confusing, but that's for another time. Whatever its name, a dictionary is a collection of key/value pairs. Those key/value pairs don't necessarily have a fixed order. The keys are strings, the values can be anything. And every key is unique. An example in Python:

myDict = {'foo': 'bar', 'boo': 'baz'}
myElem = myDict['foo'] # myElem contains 'bar'
myDict['boo'] = 'bla'
print(myDict) # output: {'foo': 'bar', 'boo': 'bla'}

PHP's "Frankenstein" array

Once upon a time, the creators of PHP thought it would be a Good Idea to merge lists and dictionaries into one data type, which, to make things worse, they named "array". With the following effects:
- elements in a PHP array are always ordered
- elements in a PHP array can have a string-based key, or a numeric index
- these numeric indexes don't have to be consecutive (spoiler alert: this is essential!)

Hmmm, I wonder if that could lead to problems. Let's see how this works.

Lists in PHP

$myArray = [
  'element 1',
  'element 2',
  'element 3',
];
print_r($myArray);
print_r($myArray[1]);

returns as output:

Array
(
    [0] => element 1
    [1] => element 2
    [2] => element 3
)
element 2

Looks intuitive, th
(read more)
The ldd utility is more vulnerable than you think. It's frequently used by programmers and system administrators to determine the dynamic library dependencies of executables. Sounds pretty innocent, right? Wrong! In this article I am going to show you how to create an executable that runs arbitrary code if it's examined by ldd. I have also written a social engineering scenario on how you can get your sysadmin to unknowingly hand you his privileges. I researched this subject thoroughly and found that it's almost completely undocumented. I have no idea how this could have gone unnoticed for such a long time. Here are the only few documents that mention this interesting behavior: 1, 2, 3, 4. First let's understand how ldd works. Take a look at these three examples: [1] $ ldd /bin/grep linux-gate.so.1 => (0xffffe000) libc.so.6 => /lib/libc.so.6 (0xb7eca000) /lib/ld-linux.so.2 (0xb801e000) [2] $ LD_TRACE_LOADED_OBJECTS=1 /bin/grep linux-gate.so.1 => (0xffffe000) libc.so.6 => /lib/libc.so.6 (0xb7e30000) /lib/ld-linux.so.2 (0xb7f84000) [3] $ LD_TRACE_LOADED_OBJECTS=1 /lib/ld-linux.so.2 /bin/grep linux-gate.so.1 => (0xffffe000) libc.so.6 => /lib/libc.so.6 (0xb7f7c000) /lib/ld-linux.so.2 (0xb80d0000) The first command [1] runs ldd on /bin/grep. The output is what we expect -- a list of dynamic libraries that /bin/grep depends on. The second command [2] sets the LD_TRACE_LOADED_OBJECTS environment variable and seemingly executes /bin/grep (but not quite). Surprisingly the output is the same! The third command [3] again sets the LD_TRACE_LOADED_OBJECTS environment variable, calls the dynamic linker/loader ld-linux.so and passes /bin/grep to it as an argument. The output is again the same! What's going on here? It turns out that ldd is nothing more than a wrapper around the 2nd and 3rd command. In the 2nd and 3rd example /bin/grep was never run. That's a peculiarity of the GNU dynamic loader. If it notices the LD_TRACE_LOADED_OBJECTS environment variable, it never executes the program, it outputs the list of dynamic library dependencies and quits. (On BSD ldd is a C program t
(read more)
January 21, 2022 I recently read through James Turner’s “Open Source Has a Funding Problem” on the Stack Overflow blog. I recommend it. Great addition to all the new writing on open software funding and business realities. There was one minor error that bothered me, mentioned just in passing: For example, having a dual licensing of MIT for non-commercial and a custom license for commercial purposes. I get what that means. I think others will, too. But legally, it doesn’t work. Don’t say “MIT, but only for noncomm
(read more)
#[turn_off_the_borrow_checker] You can’t “turn off the borrow checker” in Rust, and you shouldn’t want to. Rust’s references aren’t pointers, and the compiler is free to decimate code that tries to use references as though they are. However, if you would like to pretend the borrow checker doesn’t exist for educational purposes, and never in production code, this macro will suppress many (though not all) borrow checker errors in the code it’s applied to. fn main() { let mut source = 1; let mutable_alias = &mut source; source = 2; *mutable_alias
(read more)
I wrote this a few years ago (before 2012), about the idea that a method or function should have only one “exit point”, i.e. at most one return statement. It is now hosted here for reference. Fortunately, this “law” seems to be becoming less common. The article is still generally my opinion. To sum up: Do you code in C or a similar old-school, low-level language? If so, stop reading now because the rest of this article does not apply to your practices. The arguments in favour of the single-return style originated in the C programming language for reasons of manual resource management. But these reasons are irrelevant to Java, C#, JavaScript, Ruby, Python etc. There is a high bar to clear to call something a “law” and the idea that “a method should have at most one return statement” does not meet it for modern languages. There is no formal study that shows that this rule leads to safer, more readable or otherwise better code in these languages. It is therefore just a style. It is not a useful style when applied as a blanket rule. There are cases where a single return is more readable or simpler, and cases where it isn’t. If you learn when and how to use multiple returns, you can write more expressive code. Do not blindly follow cargo-cult rules. Objections At the time the blog post attracted some comments. So this style must be an “important” issue to coders, like tabs vs. spaces. Most of the outraged, dogmatic replies that I have received are along the lines of “but in this case a single return is better.” No doubt, but that is no contradiction to what I am saying: Both styles have uses, so learn to apply both, and choose which best fits your case. If there is resource de-allocation or logging at the end of the method that cannot be handled by a block structure of the language then you might prefer a single exit point in that case. Examples of long, confusing methods are not an argument either way. You can write them in either style, and the cure is the same: refactor to extract methods. If you somehow still think that the single-return rule is applicable across all languages, please read about match expressions in F#, Erlang’s case expressions or Haskell’s pattern matching and get back to me. In those constructs, you cannot avoid using multi
(read more)
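To make the trade-off above concrete, here is a small, hypothetical Python illustration of both styles; both functions compute the same thing, and neither is a "law".

def discount_single_return(customer):
    # Single exit point: the result is threaded through nested conditions.
    result = 0.0
    if customer is not None:
        if customer.get("active"):
            if customer.get("orders", 0) > 10:
                result = 0.1
    return result

def discount_early_returns(customer):
    # Guard clauses: each precondition exits immediately and the happy path
    # ends up unindented at the bottom.
    if customer is None:
        return 0.0
    if not customer.get("active"):
        return 0.0
    if customer.get("orders", 0) <= 10:
        return 0.0
    return 0.1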
Alloy is a language and analyzer for formal software modeling. As a way of starting to learn Alloy I model a toy design that I know to be broken: Python pip’s legacy dependency resolution algorithm. By the end of this article you will be able to: Use Alloy to model and analyze a software design with state that varies over time. Analyze Python pip’s legacy dependency resolution algorithm. Explore a software design iteratively using formal modeling. The opinions expressed in this blog are my own and not necessarily those of my employer. Introduction It is well known that pip’s legacy dependency resolution algorithm allows an installation to replace previously installed packages with different versions. This can break previously installed packages, or even worse be relied upon in order for them to work. The new version of pip has a new dependency resolver to prevent this from happening. In this article we iteratively design a package manager using formal methods and discover that pip’s approach was doomed. We then use the model to see options for resolving this. You can follow along by downloading Alloy if you’d like. Prior art, references, and other resources Formal Software Design with Alloy 6 is a work-in-progress book documenting the most recent release of Alloy, Alloy 6. I learned a lot following the guide, and this is a great starting point for learning about Alloy. I knew that I wanted to model a simplified version of pip’s legacy dependenc
(read more)
zsh-autoquoter is a zle widget ("zsh plugin") that will automatically put quotes around arguments to certain commands. So instead of having to decide which type of quotes to suffer through: $ git commit -m 'we haven'\''t seen the last of this "feature"' $ git commit -m "we haven't seen the last of this \"feature\"" You can just write English: $ git commit -m we haven't seen the last of this "feature" And let zsh-autoquoter do the rest. Configure command prefixes that you want to be autoquoted by setting the ZAQ_PREFIXES array in your ~/.zshrc: ZAQ_PREFIXES=('git commit -m' 'note' 'todo') By default this array is empty. You need to opt into autoquote behavior. Note that ZAQ_PREFIXES is an array of exact string prefixes: they are sensitive to whitespace, and zsh-autoquoter has no understanding of commands or flags or other shell syntax. If you have "git commit -m" as a prefix, and you type: $ git commit -a -m hi hello Then zsh-autoquoter will not fire, even though you probably want it to. A future version of zsh-autoquoter might have an option to parse commands more intelligently, but it currently does not. Special characters zsh-autoquoter runs before your shell has a chance to expand or glob or parse the command you typed, so it works to quote any shell syntax: $ git commit -m * globs and | pipes and > redirects and # comments oh my There is one exception: Double escaping zsh-autoquoter won't add quotes if there already are quotes, so you can still type: $ git
(read more)
tl;dr: Lockfiles often protect you from malicious new versions of dependencies. When something bad happens, they empower you to know exactly which systems were affected and when, which is critical during incident response. This post discusses "why lockfiles" and the details of setting them up properly across ~9 different package managers. It's wonderful to write a few lines of code and then shortcut the next million lines by depending on code written by thousands of other developers. But there is a cost: trusting thousands of other developers. Sometimes this goes wrong. The security implications of this trust are generally known as "supply chain security." And since the median seniority of a developer is dropping as the number of new developers grows, future developers will be reusing more code written by increasingly junior developers. A prerequisite to having a handle on supply chain problems is lockfiles, which reduce the surface area of dependency code by specifying exact dependency versions and content. A quick outline of this post: What is a lockfile? Why are lockfiles critical for supply chain security? What are the arguments against lockfiles? What languages/package managers support lockfiles? I'm convinced! How do I get started using lockfiles? What is a lockfile? Dependency manifest Before explaining lockfiles, let's look at the dependency manifest. Most package managers have a manifest file that specifies dependencies–usually a tuple of (package, version or range). Often it allows specifying a range of acceptable versions, typically using a Semver expression. You've probably seen this file (package.json, requirements.txt, pom.xml) before, but here's an example snippet from a Python Pipfile manifest: [packages] click = "~=8.0.1" It includes a package (click) and a version range that is considered acceptable (any patch on version 8.0 above 8.0.1). Lockfile The lockfile is a "compiled" version of a dependency manifest. It specifies the exact version of every dependency installed. A good lockfile format recursively specifies all dependencies of dependencies. Some lockfiles also specify the set of allowed SHA hashes for the dependency binary or source (see later in the post for which lockfiles support this extra level of specificity). For example, in Python, the corresponding lockfile entry in Pipfile.lock might look like: "click": { "hashes": ["sha256:353f466495adaeb40b6b5f592f9f91cb22372351c84caeb068132442a4518ef3", "sha256:410e932b050f5eed773c4cda94de75971c89cdb3155a72a0831139a79e5ecb5b"], "index": "pypi", "version": "==8.0.3" }, Why are lockfiles critical for supply chain security? The most fundamental question for a supply chain is: what's in it? If you can't answer the question of what code you depend on, you can’t reason about risk inside it. Without a lockfile, you don't know: Which versions of a dependency were actually installed Where they were installed At what time a dependency version or content changed Will knowing these prevent you from getting hacked via a dependency? The preventative angle is limited, but there is one small benefit: just trust on first use (TOFU). You ar
(read more)
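To make the role of those hashes concrete, here is a small Python sketch of the check a hash-pinning lockfile enables: compare the digest of a downloaded artifact against the allowed set before using it. The digests below are the ones from the Pipfile.lock snippet in the excerpt, the filename is hypothetical, and real tools (for example pip's --require-hashes mode) perform this check for you.

import hashlib

ALLOWED_SHA256 = {
    "353f466495adaeb40b6b5f592f9f91cb22372351c84caeb068132442a4518ef3",
    "410e932b050f5eed773c4cda94de75971c89cdb3155a72a0831139a79e5ecb5b",
}

def verify_artifact(path):
    # Refuse to use anything whose content does not match the lockfile.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest not in ALLOWED_SHA256:
        raise RuntimeError(f"{path}: digest {digest} is not pinned in the lockfile")

# verify_artifact("click-8.0.3-py3-none-any.whl")  # hypothetical local filename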
CryptoLyzer is a multiprotocol cryptographic settings analyzer with SSL/TLS, SSH, and HTTP header analysis ability. The main purpose of the tool is to tell you what kind of cryptography-related settings are enabled on a client or server. If you are not interested in the principles behind the project, but the practice, feel free to skip the next section and jump to the Practice section. Rationale There are many notable open-source projects (SSLyze, CipherScan, testssl.sh, tls-scan, …) and several SaaS solutions (CryptCheck, CypherCraft, Hardenize, ImmuniWeb, Mozilla Observatory, SSL Labs, …) that do security setting analysis, especially when we are talking about TLS, which is the most common and popular cryptographic protocol. However, most of these tools heavily depend on one or more versions of one or more cryptographic protocol libraries, like GnuTLS, OpenSSL, or wolfSSL. But why is this such a problem? The minor problem is that the dependency easily gets them stuck in SSL/TLS/DTLS, as other cryptographic protocols (e.g. IPSec VPN, Kerberos, OpenVPN, SSH, …) cannot be implemented directly with these libraries, so supporting them in the analyzer application takes extra effort. Anyway, most cryptographic setting analysis does not require any cryptography at all, because the parties talk in plain text before they agree on the cryptographic algorithms. The major problem is the fact that an analysis has to intentionally trigger special and corner cases of the protocol. That is hard to do with a cryptographic protocol library, which was designed for production, not for penetration testing or settings analysis. During an analysis, the tool tries to establish connections with hardly supported, experimental, obsoleted, or even deprecated mechanisms or algorithms to identify which ones are supported by the given client or server implementation. These mechanisms and algorithms may or may not be supported by the latest (or any) version of any cryptographic protocol implementation. That is why most of the existing tools require special build(s) of the dependent library where all the protocol versions and algorithms of the good old days are re-enabled, to have a chance of setting up these libraries to offer them as clients or as servers. But what if we want to test an algorithm or a mechanism that has never been implemented by the dependent cryptographic library? It is not just a theory. A special fork of OpenSSL, maintained by Pluralsight author Peter Mosmans, aims to have as many ciphers as possible. This fork is used and recommended by Mozilla Cipherscan; however, it can offer fewer than two hundred cipher suites, while there are more than three hundred in the different RFCs according to Cipher Suite Info. The majority of them are weak or insecure, which makes it particularly important that they be part of the analysis. In addition, it is also true that there are cipher suites that are not on the Ciphe
(read more)
Try it out: STL to ASCII Generator UPDATE: Huge thanks to Oskitone for their help in adding a text export option! The STL to ASCII Generator is a lightweight and easy way to convert an STL file (3D model) into an ASCII image. Just upload your STL file and select a character set to generate the image. You can enter your own custom text to change the characters used in the ASCII image, and reset them to the default character set: ' .:-+*=%@#' Future Development: Copy ASCII image to clipboard Change lighting orientation Add screenshot on mobile Find this project useful? You can buy me a coffee on Ko-Fi!
(read more)
This article is an introduction to the Command Line Interface (CLI) in general on unixy machines, like macOS or Linux. The target audience is (complete) beginners with the shell, but not with programming. Per
(read more)
Custom OpenWRT build for Speedify An easy-to-use OpenWRT-based flavor for Speedify with no CLI requirement. Typical use case: deploy Speedify to your home network in minutes with a few clicks; it can be set up from a smartphone. Targets: Raspberry Pi 4 and Generic x86_64 (+VM). Most modules are enabled; check build.sh in devconfigs. 2x USB ethernet adapters and 2x tethering interfaces are automatically configured for quick plug and play on first boot (DHCP). Use the discussions tab in GitHub for forum-like discussion on networking configurations, and the issues tab for SmoothWAN specifics. Interactive discussion server: https://discord.gg/AxSSjpgwjx Why Speedify? - SDWAN-esque: Having
(read more)
I'm pleased to announce the MVP release of the NAppGUI binding for Oberon+. NAppGUI is a great, lean cross-platform OS abstraction and GUI library written and very well documented by Francisco García Collado. It perfectly matches Wirth's philosophy of simplicity, which he had already based his programming language Oberon on. Francisco writes "I started working on this project [...] in mid 2008 when I was finishing my Computer Engineering studies at the University of Alicante. I wanted to develop a physical systems simulator that worked both on PC-Windows computers and Apple iMac without having to duplicate all the work. The technological alternatives of the time, such as GTK or Qt, did not convince me at all as they were too heavy, complicated to use and slow so they would end up tarnishing the quality, elegance and commitment that I was putting into my mathematical calculation algorithms. [...] In the middle of 2015, I began to think about the fact that all the technical effort made during these years is enough to become a product by itself. It was then that I created the NAppGUI project and started to migrate all the iMech libraries devoted to multiplatform development. [...] On September 31, 2019, I upload the first public version of NAppGUI." And as we know, in September 2021 he generously made the source code available to the public under a very liberal license. Oberon+ uses a shared library version of NAppGUI via its FFI language. I converted the NAppGUI C header files using the C preprocessor and my C2OBX tool and then manually adjusted the generated external library modules as required when I migrated some of the NAppGUI demo applications to Oberon+. Only a few extensions in the Oberon+ language were needed, and thanks to the FFI language the code still looks quite similar to the examples in the NAppGUI documentation, which allows their reuse. Here is all you need to start: https://github.com/rochus-keller/Oberon/tree/master/testcases/NAppGUI; the directory contains the NAppCore, NAppDraw and NAppWidgets external library modules and some examples; it's easiest to begin with an example, e.g. Fractals. The examples are included with the pre-compiled versions of the Oberon IDE (see below). Here is a screenshot of the Fractals application running in the IDE: Note that you can run the exact same application either with the compiled CLI bytecode under Mono or with the C99 transpiled code with any compatible C compiler on any platform supported by NAppGUI! Here are the pre-compiled versions of the IDE with included NAppGUI shared library version for each platform (download, unpack/mount and run, no installation required): http://software.rochus-keller.ch/OberonIDE_win32.zip http://software.rochus-keller.ch/OberonIDE_macOS_x64.dmg http://software.rochus-keller.ch/OberonIDE_linux_i386.tar.gz A copy of these packages is also attached to this post, see below. See also https://github.com/rochus-keller/Oberon/blob/master/README.md and http://oberon-lang.ch.
(read more)
Heading to a friend’s house with the family (& dogs) to generally cause havoc relax and catch up. No doubt will involve board/card games and tiring out the dogs in the local country park. Reached the point my new MacBook Pro is mostly setup through nix(/brew/asdf), so now I want to tidy up and make my config more modular to roll out the same settings on other machines (Mac/NixOS based). Suspect server deploys will involve deploy-rs. Copy/pasting the same 550 line .nix file between all my machines is probably a bad idea 😂
(read more)
[Contrast checker: preview panes for normal text, large text, and graphical objects / user interface components, each with a WCAG AA pass/fail indicator.] Enter a foreground and background color in RGB hexadecimal format (e.g., #FD3 or #F7DA39) or choose a color using the color picker. The Lightness slider can be used to adjust the selected color. WCAG 2.0 level AA requires a contrast ratio of at least 4.5:1 for normal text and 3:1 for large text. WCAG 2.1 requires a co
(read more)
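For reference, the ratio such a checker computes follows the WCAG 2.x definition: linearize each sRGB channel, take the relative luminance 0.2126 R + 0.7152 G + 0.0722 B, then divide (lighter + 0.05) by (darker + 0.05). A small Python sketch of that calculation:

def relative_luminance(hex_color):
    hex_color = hex_color.lstrip("#")
    if len(hex_color) == 3:  # expand shorthand such as #FD3
        hex_color = "".join(ch * 2 for ch in hex_color)
    linear = []
    for i in (0, 2, 4):
        c = int(hex_color[i:i + 2], 16) / 255
        linear.append(c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4)
    r, g, b = linear
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(foreground, background):
    lighter, darker = sorted((relative_luminance(foreground), relative_luminance(background)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

ratio = contrast_ratio("#F7DA39", "#000000")
print(f"{ratio:.2f}:1 ->", "passes" if ratio >= 4.5 else "fails", "WCAG AA for normal text")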
This guide shows you how to implement drag and drop in Qml, including how to reorder the backing C++ (QAbstractListModel derived) data model. Most QML Drag and Drop examples you find online, including the official Qt example, use a ListModel in the same Qml file which has the data, but no example I found actually reordered a C++ model. This example has a simple MVVM (model-view-viewmodel) C++ structure and a QML file with a drag and drop grid. The dragable example items come from the C++ model, which is derived from QAbstractListModel. This guide assumes you're familiar with Qml and have read through the Drag and DropArea documentation and the official drag and drop example. Drag and Drop in Qml Qml has the concept of drag and drop built in: you define a DropArea somewhere and make something Drag-able, that's basically it. Really neat and quick to set up, including official examples. The second official example shows a grid of tiles which you can reorder by dragging and dropping them. It uses a DelegateModel, a special Qml repeater-like control which has both the model and the delegate, to move a delegate item to the position of another item it is dragged over. The example also states, quite clearly: The GridView Example adds drag and drop to a GridView, allowing you to visually reorder the delegates without changing the underlying ListModel. In my case, also changing the underlying listmodel (and backing C++ model) is exactly what I want to do. It turned out to be a bit convoluted, due to how a DelegateModel acts as a proxy: it has a ListModel which you can manipulate, but that is more like a copy of the original model: you must explicitly propagate changes back to your C++ code. Basic MVVM Qt setup A recording of the demo application The example application follows an MVVM-like pattern. It has a C++ class named Thingie. In this case a Thingie has two properties, a name and a color, but imagine it to be a more complex class, maybe an image of some kind. There is a ThingieListModel, your basic Qt QAbstractListModel derived list, with a backing QList<Thingie> and one extra special method (move). Finally there is a ThingieModel, the class that houses all business logic. In an actual MVVM application there would also be a ViewModel, but for this example that would be too much. The ThingieModel is exposed to QML and constructs the list of Thingies, which is also exposed to Qml as a property, via the model. You can find the code here on my Github, but for the sake of convenience, the code is also at the bottom of this article. QML Drag & Drop My example has a grid of squares that you can drag and drop to re-order. The grid is in a separate file named ThingGrid and houses a GridView with a DelegateModel. The delegate of this model is another control, a ThingTile. This ThingTile has most of the Drag logic (rectangle with mousearea) and the tiles on the ThingGrid have most of the Drop logic (DropArea). Inside the ThingTile you define your own element, which in the case of the example is a Text, but could be anything. Where my example differs from the Qt example is that my code has an explicit MouseArea in the dragable ti
(read more)
I’m in the unfortunate circumstance of using a mandatory proxy these days (including SSL) and unlike with browsers where it’s kind of fire and forget, if you’re developing software there’s a plethora of tools that will or will not accept the default environment variables, so here’s a list of stuff and how to fix it. The usual proxy variables: PROXY="http://proxy.local:8080" export http_proxy="$PROXY" export HTTP_PROXY="$PROXY" export https_proxy="$PROXY" export HTTPS_PROXY="$PROXY" export no_proxy='localhost,127.0.0.1,*.localstuff.example.org' export NO_PROXY='localhost,127.0.0.1,*.localstuff.example.org' Interestingly, the internet can’t seem to agree on whether it’s the uppercase or the lowercase version that matters. I think there’s no harm in setting all of them and just not thinking about it anymore. Fortunately this solves the issues for all tools and package managers that use curl under the hood. I’ve since configured these additional ones: MY_CA_CERT=/foo/my-cert.crt # nix-pkgs export NIX_SSL_CERT_FILE="$MY_CA_CERT" Although on my current machine I actually have it set to /etc/ssl/certs/ca-certificates.crt Aside, on Ubuntu you can trust your org’s CA cert like this: $ sudo cp MyOrgCA.crt /usr/local/share/ca-certificates/MyOrgCA.crt $ sudo update-ca-certificates So this week I tried to install the Phoenix framework and that was a journey. Apparently kerl and kiex work with curl, so that was no problem. The fun started with mix, where I think it’s not documented properly, or at least their docs aren’t ranking high enough, so I first arrived at export HEX_UNSAFE_HTTPS=1, which is a bad idea, so don’t do that. The actual solution seems to be: export HEX_CACERTS_PATH="$MY_CA_CERT" But then the next riddle came up: mix phx.server in Phoenix’s hello world example seemed to be downloading stuff from the npm registry. I mean, it kinda makes sense to have some JS dependencies for a web project, but it was still a bit weird. Asking in #elixir on IRC gave me the answer though that this was an esbuild watcher that was being started, probably to minify some assets or whatever. OK, esbuild, that’s nodejs you might think, there’s a variable for that: export NODE_EXTRA_CA_CERTS="$MY_CA_CERT" Just that it didn’t help, for whatever reason. I didn’t feel like debugging why it didn’t pick up the variable if there was another way. As I am writing this, there’s still an open issue in this esbuild module for Phoenix, #31. I used the workaround described there, installing esbuild by hand and then doing # I do not like install -g npm install esbuild export MIX_ESBUILD_PATH=$(readlink -f node_modules/.bin/esbuild) but then you need to add this to config/config.exs: config :esbuild, version: "0.14.0", path: System.get_env("MIX_ESBUILD_PATH") But it worked, so it’s fine. Oh, and of course Docker also doesn’t work properly behind such a proxy, so I did this, although there should be other solutions: $ cat /etc/systemd/system/docker.service.d/http-proxy.conf [Service] Environment="HTTP_PROXY=http://proxy.local:8080" Environment="HTTPS_PROXY=http://proxy.local:8080" Environment="NO_PRO
(read more)
Happy birthday! It's 10 years since the launch of DynamoDB, Amazon's fast, scalable, NoSQL database. Back when DynamoDB launched, I was leading the team rethinking the control plane of EBS. At the time, we had a large number of manually-administered MySQL replication trees, which were giving us a lot of operational pain. Writes went to a single primary, and reads came from replicas, with lots of eventual consistency and weird anomalies in the mix. Our code, based on an in-house framework, was also hard to work with. We weren't happy with our operational performance, or our ability to deliver features and improvements. Something had to change. We thought a lot about how to use MySQL better, and in the end settled on ditching it entirely. We rebuilt the whole thing, from the ground up, using DynamoDB. At the time my main attraction to DynamoDB was somebody else gets paged for this, with a side order of it's fast and consistent. DynamoDB turned out to be the right choice, but not only for those reasons. To understand the real value of DynamoDB, I needed to think more deeply about one of the reasons the existing system was painful. It wasn't just the busywork of DB operations, and it wasn't just the eventual consistency. The biggest pain point was behavior under load. A little bit of unexpected traffic and things went downhill fast. Like this: Our system had two stable modes (see my posts on metastability and on cache behavior): one where it was ticking along nicely, and one where it had collapsed under load and wasn't able to make progress. That collapsing under load was primarily driven by the database itself, with buffer/cache thrashing and IO contention the biggest drivers, but that wasn't the real cause. The real cause was that we couldn't reject work well enough to avoid entering that mode. Once we knew - based on queue lengths or latency or other output signals - the badness had already started. The unexpectedly expensive work had already been let in, and the queries were already running. Sometimes cancelling queries helped. Sometimes failing over helped. But it was always a pain. Moving to DynamoDB fixed this for us in two ways. One is that DynamoDB is great at rejecting work. When a table gets too busy you don't get weird long latencies or lock contention or IO thrashing, you get a nice clean HTTP response. The net effect of DynamoDB's ability to reject excess load (based on per-table settings) is that the offered load/goodput graph has a nice flat "top" instead of going up and then sharply down. That's great, because it gives systems more time to react to excess load before tipping into overload. Rejections are a clear leading signal of excess load. More useful than that is another property of DynamoDB's API: each call to the database does a clear, well-defined unit of work. Get these things. Scan these items. Write these things. There's never anything open-ended about the work that you ask it to do. That's quite unlike SQL, where a single SELECT or JOIN can do a great deal of work, depending on things like index selection, cache occupancy, key distribution, and the skill of the query optimizer. Most crucially, though, the am
(read more)
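A rough Python/boto3 sketch of the "clean rejection" behavior described above; the table name and key are made up, and boto3 already retries some throttles on its own. The point is that an overloaded table answers with an explicit error code, a leading signal the caller can use to back off or shed load instead of discovering overload through latency.

import time
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("users")  # hypothetical table

def get_user(user_id, retries=3):
    for attempt in range(retries):
        try:
            return table.get_item(Key={"id": user_id}).get("Item")
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code in ("ProvisionedThroughputExceededException", "ThrottlingException"):
                time.sleep(0.1 * 2 ** attempt)  # explicit rejection -> back off
                continue
            raise
    raise RuntimeError("still throttled after retries; shed load upstream")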
Moritz Systems have been contracted by the FreeBSD Foundation to continue our work on modernizing the LLDB debugger’s support for FreeBSD. The primary goal of our contract is to bring kernel debugging into LLDB. The complete Project Schedule is divided into six milestones, each taking approximately one month: Improve LLDB compatibility with the GDB protocol: fix LLDB implementation errors, implement missing packets, except registers. Improve LLDB compatibility with the GDB protocol: support gdb-style flexible register API. Support for debugging via serial port. libkvm-portable and support for debugging kernel core files in LLDB, on amd64 + arm64 platform. Support for other platfor
(read more)
We go to museums to be moved. We are stunned by artworks of steel, glass or
(read more)
This workshop provides the fundamentals of reverse engineering (RE) Windows malware through hands-on experience with RE tools and techniques. You will be introduced to RE terms and
(read more)
More than once I came across a story of a heroic MicroPro programmer who in an all-night session managed to port WordStar from CP/M to DOS by patching a single byte. This is how the legend was retold by Joel Spolsky: Now, here’s a little known fact: even DOS 1.0 was designed with a CP/M backwards compatibility mode built in. Not only did it have its own spiffy new programming interface, known to hard core programmers as INT 21, but it fully supported the old CP/M programming interface. It could almost run CP/M software. In fact,  WordStar was ported to DOS by changing one single byte in the code. (Real Programmers can tell you what that byte was, I’ve long since forgotten).Joel Spolsky, May 24, 2000 Now, that story is slightly misleading. The “spiffy new programming interface” accessible through INT 21h pretty much was the CP/M programming interface, and it wasn’t until DOS 2.0 that the INT 21h interface was significantly enhanced. But the gist of the story does not even make sense. Although DOS was designed to make porting from CP/M easy, it was never a question of patching a byte here or there, since CP/M ran on 8080 CPUs and DOS ran on 8086/8088 processors. The processor families are certainly related, but not at all binary compatible. 8080 assembly source code could be machine translated to 8086 source and reassembled, but the code quality was reportedly less than ideal. And yet… there is a kernel of truth in the story, even though it morphed into something highly implausible. Not unlike there really are Wang word processor symbols in the IBM PC character set, even though the stories told by Bill Gates are very difficult to take seriously. The other day I tried to understand the very interesting disk format of Victor 9000 (aka Sirius S1) machines, and a Supplementary Technical Reference Manual from 1984 proved very helpful. Therein I found the following section that is short enough to quote in full: 6.4 How to turn a CP/M version of WordStar 3.21 into an MS-DOS version using DDT86 DDT86 DDT86 1.1 -RWS.CMD START END 03C0:0000 03C0:52FF -S0324 03C0:0324 E9 90 03C0:0325 39 90 03C0:0326 00 C3 03C0:0327 E9 90 03
(read more)
abstract class Department { constructor(public name: string) {} printName(): void { print("Department name: " + this.name); } abstract printMeeting(): void; // must be implemented in derived classes } class AccountingDepartment extends Department { constructor() { super("Accounting and Auditing"); // constructors in derived classes must call super() } printMeeting(): void { print("The Accounting Department meets each Monday at 10am."); } generateReports(): void { print("Generating accounting reports..."); } } function main() { let department: Department; // ok to create a reference to an abstract type department = new AccountingDepartment(); // ok to create and assign a non-abstract subclass department.printName(); department.printMeeting(); //department.generateReports(); // error: department is not of type AccountingDepartment, cannot access generateReports } Run tsc --emit=jit example.ts Result Department name: Accounting and Auditing The Accounting Department meets each Monday at 10am. Compile as JIT File hello.ts function main() { print("Hello World!"); } Result
(read more)
In my earlier post about the garbage collector, I lied a little bit about the data representation that CHICKEN uses. At the end of the post I briefly mentioned how CHICKEN really stores objects. If you want to fully understand the way CHICKEN works, it is important to have a good grasp on how it stores data internally. Basic idea CHICKEN attempts to store data in the most "native" way it can. Even though it's written in C, it tries hard to use machine words everywhere. So on a 32-bit machine, the native code that's eventually generated will use 32-bit wide integers and pointers. On a 64-bit machine it will use 64-bit wide integers and pointers. This is known as a C_word, which is usually defined as an int or a long, depending on the platform. By the way, the C_ prefix stands for CHICKEN, not the C language. Every Scheme value is represented as a C_word internally. To understand how this can work, you need to know that there are roughly two kinds of objects. Immediate values First, there are the immediate values. These are the typical "atomic" values that come up a lot in computations. It is important to represent these as efficiently as possible, so they are packed directly in a C_word. This includes booleans, the empty list, small integers (these are called fixnums), characters and a few other special values. Because these values are represented directly by a C_word, they can be compared in one instruction: eq? in Scheme. These values do not need to be heap-allocated: they fit directly in a register, and can be passed around "by value" in C. This also means they don't need to be tracked by the garbage collector! At a high enough level, these values simply look like this: This doesn't really show anything, does it? Well, bear with me... Block objects The other kind of value is the block object. This is a value that is represented as a pointer to a structure that contains a header and a variable-length data block. The data block is a pointer which can conceptually be one of two types. In case of a string or srfi-4 object, the data block is simply an opaque "blob" or byte-vector. In most other cases, the block is a compound value consisting of ot
(read more)
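The immediate-value idea above is a flavor of pointer tagging. As a rough illustration only (this is not CHICKEN's actual bit layout, just the general fixnum-tagging trick sketched in Python), a small integer can be packed into a machine word with a tag bit and unpacked again without ever touching the heap:

FIXNUM_TAG = 1  # assumed convention for this sketch: lowest bit set means "immediate small integer"

def encode_fixnum(n):
    # Shift the value up and set the tag bit; the payload lives in the word itself.
    return (n << 1) | FIXNUM_TAG

def is_fixnum(word):
    return (word & 1) == FIXNUM_TAG

def decode_fixnum(word):
    # Arithmetic shift recovers the original value (also for negatives).
    return word >> 1

word = encode_fixnum(21)
print(is_fixnum(word), decode_fixnum(word))  # True 21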
Fast Forward Computer Vision: train models at a fraction of the cost with accelerated data loading! [install] [quickstart] [features] [docs] [support slack] [homepage] Maintainers: Guillaume Leclerc, Andrew Ilyas and Logan Engstrom ffcv is a drop-in data loading system that dramatically increases data throughput in model training: Train an ImageNet model on one GPU in 35 minutes (98¢/model on AWS) Train a CIFAR-10 model on one GPU in 36 seconds (2¢/model on AWS) Train a $YOUR_DATASET model $REALLY_FAST (for $WAY_LESS) Keep your training algorithm the same, just replace the data loader! Look at these speedups: ffcv also comes prepacked with fast, simple code for standard vision benchmarks: Installation conda create -y -n ffcv python=3.9 cupy pkg-config compilers libjpeg-turbo opencv pytorch torchvision cudatoolkit=11.3 numba -c pytorch -c conda-forge conda activate ffcv pip install ffcv Troubleshooting note: if the above commands result in a package conflict error, try running conda config --env --set channel_priority flexible in the environment and rerunning the installation command. Citation If you use FFCV, please cite it as: @misc{leclerc2022ffcv, author = {Guillaume Leclerc and Andrew Ilyas and Logan Engstrom and Sung Min Park and Hadi Salman and Aleksander Madry}, title = {ffcv}, year = {2022}, howpublished = {\url{https://github.com/libffcv/ffcv/}}, note = {commit xxxxxxx} } (Make sure to replace xxxxxxx above with the hash of the commit used!) Quickstart Accelerate any learning system with ffcv. First, convert your dataset into ffcv format (ffcv converts both indexed PyTorch datasets and WebDatasets): from ffcv.writer import DatasetWriter from ffcv.fields import RGBImageField, IntField # Your dataset (`torch.utils.data.Dataset`) of (image, label) pairs my_dataset = make_my_dataset() write_path = '/output/path/for/converted/ds.beton' # Pass a type for each data field writer = DatasetWriter(write_path, { # Tune options to optimize dataset size, throughput at train-time 'image': RGBImageField(max_resolution=256, jpeg_quality=jpeg_quality), 'label': IntField() }) # Write dataset writer.from_indexed_
(read more)
I built a new keyboard recently. It looks like this: It has some interesting features: It’s entirely wireless (the left half speaks Bluetooth to the right half, and the right half speaks Bluetooth to my computer). The bottom of the case is tapped for standard 1/4-20 tripod-mounting threads. The bottom of the case also has some very strong rare-earth magnets. The tripod mounts mean that I can “tilt” and “tent” the keyboard however I want, with some basic camera mounting gear: Even at pretty extreme angles: And since it’s wireless, I can mount it to the arms of my chair, and I don’t have to worry about a TRRS cable locking me in: (My particular chair has tiny T-Rex armrests, so this isn’t really a thing I would ever do, but I still can do it.) Meanwhile, the integrate
(read more)
In this tutorial we'll use the Rust programming language to code a tiny game engine. Our game engine will respond to key presses, draw rectangles, and define a structure that could accommodate a larger engine. Our engine will use no code other than Rust's standard library and the APIs the browser provides us. It will compile near-instantly (less than a second on my computer) and be about 130 lines of code total. If you aren't familiar with Rust, it's a relatively new programming language that runs fast and helps you write better code. This tutorial includes some Javascript and web code, but the general ideas apply to non-web as well. To follow along you should already know the basics of programming. This tutorial is written to be legible for a Rust beginner and skimmable for a Rust
(read more)
A small, quick-starting, native Clojure interpreter. It is built on top of the Clojure JVM runtime, but the parts that need dynamic class loading have been reimplemented in Clojure so that it could be compiled into a native application. Features Starts quickly (it is compiled with GraalVM native-image) Small (≪1K SLOC) Out of the Box core.async support and also many other core libraries Usage Download the binary from the Release page and run the uclj command: call uclj without parameters to get a REPL call uclj filename.clj to load a file call uclj filename.clj --test to load a file and then run all test cases in it call uclj '(...)' to evaluate a Clojure expression. (It must start with a ( character.) Build You can also build the binary yourself. You will need Leiningen and GraalVM to build the application. Set the GRAALVM_HOME environment variable and run the build-graal.sh script. Benchmarks The author's Advent Of Code 2021 Clojure solutions are used for benchmarking. The values are runtimes in mean + standard deviation format in milliseconds; the smaller, the better. See benchmark.clj for details. test case uclj bb v0.7.3 clojure 1.10.3 test case uclj
(read more)
(Part 1 → Part 2 → Part 3) In the previous two parts, I described how working on Ruby’s changelog made me imagine I understand the language’s logic and the intentions behind it. Then, that fantasy brought me to participate more insistently in language development. And then, that participation made me suffer when several aspects of Ruby 2.7’s evolution hadn’t developed the way I expected—and in one case, an important feature was reverted a couple of months before the final release. I was devastated. Probably it all wouldn’t bear that much significance for me if I considered the programming language just a bag of convenience features thrown together: one feature less, one feature more, one feature doesn’t look like others, who cares? What always amazed me in Ruby is the feeling of a very small core of informal rules—let’s say intuitions—that everything else followed. You could’ve uncovered behaviors without explicitly looking for them in the docs, just by assuming that what’s intuitively right would work. And a lot of my work in the Ruby community was dedicated to those intuitions: sharing them with others in my roles of a mentor and senior/principal developer; documenting them; and—yes—trying to push them further, to make small parts of the language as short and clear as they intuitively should’ve been. Spoiler alert: this year, a small(ish) book of mine, called—you guessed!—“Ruby Intuitions” is in development. It is still in the early stages, but I really hope I can lift it off. That’s why I was sad about what happened. It didn’t feel like a rejection of “just some feature I liked” (I had plenty of such rejections and was totally OK w
(read more)
I had an "oh, duh, of course" moment a few weeks ago that I wanted to share: is WebAssembly the next Kubernetes? katers gonna k8s Kubernetes promises a software virtualization substrate that allows you to solve a number of problems at the same time: Compared to running services on bare metal, Kubernetes ("k8s") lets you use hardware more efficiently. K8s lets you run many containers on one hardware server, and lets you just add more servers to your cluster as you need them. The "cloud of containers" architecture efficiently divides up the work of building server-side applications. Your database team can ship database containers, your backend team ships java containers, and your product managers wire them all together using networking as the generic middle-layer. It cuts with the grain of Conway's law: the software looks like the org chart. The container abstraction is generic enough to support lots of different kinds of services. Go, Java, C++, whatever -- it's not language-specific. Your dev teams can use what they like. The operations team responsible for the k8s servers that run containers don't have to trust the containers that they run. There is some sandboxing and security built-in. K8s itself is an evolution on a previous architecture, OpenStack. OpenStack had each container be a full virtual machine, with a whole kernel and operating system and everything. K8s instead generally uses containers, which don't generally require a kernel in the containers. The resul
(read more)
Recreating Heroku's Push-to-Deploy with AWS After choosing the tech stack for DM, the next step was to figure out hosting. A standard bit of wisdom for new web startups is to use Heroku instead of building and maintaining your own infrastructure. The argument is that, despite the high cost relative to other hosting solutions, Heroku will still be substantially cheaper than the time you (or someone you hire) spend managing infrastructure. Choosing Heroku is a polarizing decision. There are claims that Heroku is too expensive, too buggy, or no longer innovative. Instead, our choice was abou
(read more)
oss-sec mailing list archives Linux kernel: Heap buffer overflow in fs_context.c since version 5.1 From: Will Date: Tue, 18 Jan 2022 18:21:30 +0000 There is a heap overflow bug in legacy_parse_param in which the length of data copied can be incremented beyond the width of the 1-page slab allocated for it. We currently have created functional LPE exploits against Ubuntu 20.04 and container escape exploits against Google's hardened COS. The bug was introduced in 5.1-rc1 (https://github.com/torvalds/linux/commit/3e1aeb00e6d132efc151dacc062b38269bc9eccc#diff-c4a9ea
(read more)
I mostly use property-based testing to test stateless functional code. A technique I love to use is to pair property-based tests together with example-based tests (that is, "normal" tests) in order to have some tests that check real input. Let's dive deeper into this technique, some contrived blog-post-adequate examples, and links to real-world examples. I've been a vocal advocate of property-based testing for a while. I wrote stream_data, a property-based testing framework for Elixir, gave talks about the topic, and used property-based t
(read more)
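The article's examples use Elixir and stream_data; as a sketch of the same pairing in Python with the Hypothesis library instead, a property test checks an invariant over generated input while a plain example test pins down one concrete, human-readable case:

from hypothesis import given, strategies as st

def run_length_encode(s):
    encoded = []
    for ch in s:
        if encoded and encoded[-1][0] == ch:
            encoded[-1] = (ch, encoded[-1][1] + 1)
        else:
            encoded.append((ch, 1))
    return encoded

def run_length_decode(pairs):
    return "".join(ch * count for ch, count in pairs)

@given(st.text())
def test_encode_then_decode_roundtrips(s):
    # Property: decoding an encoding always returns the original string.
    assert run_length_decode(run_length_encode(s)) == s

def test_known_example():
    # Example-based test: one real input checked against its expected output.
    assert run_length_encode("aaabcc") == [("a", 3), ("b", 1), ("c", 2)]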
This gem lists GitHub repositories using end-of-life Ruby versions. Installation Usage Set up a GitHub access token; Export the GITHUB_TOKEN environment variable or set it when calling end_of_life; Use the end_of_life command to list the repositories: $ GITHUB_TOKEN=something end_of_life # if your platform supports symlinks, you can use the `eol` command instead [✔] Fetching repositories... [✔] Searching for EOL Ruby in repositories... Found 2 repositories using EOL Ruby (<= 2.5.9): ┌───┬──────────────────────────────────────────────┬──────────────┐ │ │ Repository │ Ruby version │ ├─
(read more)
19 Jan 2022 In the first post of the current series, I talked about Swift Package Manager basics and how we can maintain a project with many Swift modules. This week we continue the topic of Microapps architecture by introducing feature modules. Last week we created a separate module for the design system of our app that contains buttons and other shared UI components. We call them foundation modules because we will import them into many different modules and use their functionality. Another excellent example of a foundation module is the networking layer. We can also extract it into a separate module and import it whenever needed. In the current post, I want to focus on feature modules. A feature module provides complete functionality for a dedicated app feature.
(read more)
This is a cross-post of the official security advisory. The official advisory contains a signed version with our PGP key, as well. The Rust Security Response WG was notified that the std::fs::remove_dir_all standard library function is vulnerable to a race condition enabling symlink following (CWE-363). An attacker could use this security issue to trick a privileged program into deleting files and directories the attacker couldn't otherwise access or delete. This issue has been assigned CVE-2022-21658. Overview Let's suppose an attacker obtained unprivileged access to a system and needed to delete a system directory called sensitive/, but they didn't have the permissions to do so. If std::fs::remove_dir_all followed symbolic links, they could find a privileged program that removes a
(read more)
Date: 3 April 2012 Tags:python, computing, document-processing I give some advice each year in my annual Sphinx tutorial at PyCon. A grateful student asked where I myself had learned the tip. I have done some archæology and finally have an answer. Let me share what I teach them about “semantic linefeeds,” then I will reveal its source — which turns out to have been written when I was only a few months old! In the tutorial, I ask students whether or not the Sphinx text files in their project will be read by end-users. If not, then I encourage students to treat the files as private “source code” that they are free to format semantically. Instead of fussing with the lines of each paragraph so that they all end near the right margin, they can add linefeeds anywhere that
(read more)
PIs: Robert N. M. Watson (University of Cambridge), Simon W. Moore (University of Cambridge), Peter Sewell (University of Cambridge), and Peter Neumann (SRI International) January 2022: Arm has shipped its CHERI-enabled Morello prototype processor, SoC, and board! Read blog posts about this at Arm and Microsoft, and our own thoughts at Cambridge. October 2020: We have posted CHERI ISAv8. This ISA version is synchronized to Arm's Morello architecture, as well as presenting a mature version of our CHERI-RISC-V ISA. September 2019: Learn about the CHERI architecture! Our technical report An Introduction to CHERI is a high-level summary of our work on CHERI architecture, microarchitecture, formal modeling, and software. CHERI (Capabilit
(read more)
There is a difference of opinion among many programmers regarding the idea of replacing C, either in newly written code or else altogether. I find myself coming down on the side that it makes little sense to try to replace a legacy codebase, and still find C useful in some contexts (particularly in the realm of microcontrollers). That said, I think a strong case can be made for using one of the several more modern languages that have sprung up in the systems programming space, at least for newly written code. I make no secret that I love Rust and Zig both, for many similar reasons. But I also find Nim to be impressive and have heard great things about Odin. There is, frankly, room for more than one. I'm going to start this off with a small example, parsing a
(read more)
Tuesday January 18, 2022 by Ulf Hermann In my previous post, the history and general architecture of the new Qt Quick Compiler technology was explained. As promised there, the performance numbers are presented in this post. Along with qmlsc we've created a number of benchmarks to measure its impact. Note that classical JavaScript benchmarks like v8bench won't get us very far because those mainly exercise the aspects of JavaScript qmlsc does not intend to optimize. The following results are produced with three modes of operation. All of the me
(read more)
If you've managed multi-user / multi-tenant Kubernetes clusters then there's a good chance you've come across RBAC (Role-Based Access Control). RBAC provides a strong method of granting permissions to users, groups or service accounts within a cluster. These permissions can either be cluster-wide, with ClusterRole, or namespace scoped, with Role. Roles can be combined together to build up all the rules stating what the associated entity is allowed to perform. These rules are additive, starting from a base of no permissions to do anything, building up what is allowed to be performed, and there's no syntax to take away a permission that is granted by another rule. Generally, and by default, operators of the cluster are assigned to the cluster-admin ClusterRole. This gives the use
(read more)
Long time, no posts! Finally wrote up this little project from last year: Repurposing my SHA2017 hacker camp badge into a solar energy and power consumption monitor. Background Lately I've really appreciated systems that show passive information - learning something useful without needing to parse a flashy Dashboard or interact with a touch screen. For example, I like our weather station. It might be ugly but it's always there in the kitchen, quietly showing you the temperature: To experiment more with this kind of interface, I backed the Inkplate 10 programmable e-ink display on CrowdSupply last year. In the meantime, I noticed this neat project from the Netherlands that repurposed the SHA2017 Hacker Camp badge as a CO2 sen
(read more)
We're all familiar with the 3 basic categories of authentication. Knowledge factors (passwords, PINs) Possession factors (a software/hardware token - Yubikey/Google Authenticator/SecureID) Inherence factors (fingerprint, heartbeat, iris/retina scanning) While the vast majority of sites use knowledge factors, a growing number are turning to multi-factor solutions in an effort to bolster security; to the detriment of the user experience. Cue continuous authentication / behavioral biometrics... the process of identifying a user based on the subtle nuances in their voice, typing patterns, facial features and location. How does it work? As opposed to traditional authentication which is only interested in what you type, behavioral biometric systems collect & profile how you type too. By ac
(read more)
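A toy Python sketch of the "how you type" part of that idea: reduce timestamped keystrokes to inter-key intervals and compare them to a stored profile. Real behavioral-biometric systems model much more (dwell times, mouse movement, device posture) with statistical or ML models; the names and threshold here are invented for illustration.

from statistics import mean

def inter_key_intervals(events):
    # events: list of (key, timestamp_ms) pairs in typing order.
    times = [t for _key, t in events]
    return [later - earlier for earlier, later in zip(times, times[1:])]

def matches_profile(events, profile_mean_ms, tolerance_ms=40.0):
    intervals = inter_key_intervals(events)
    if not intervals:
        return False
    return abs(mean(intervals) - profile_mean_ms) <= tolerance_ms

session = [("p", 0), ("a", 120), ("s", 230), ("s", 370)]
print(matches_profile(session, profile_mean_ms=125.0))  # True for this toy data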
Wine is a program to run Windows applications on a Unix PC. Running Wine on Windows has been a fever dream of those responding to the siren call of "we do what we must, because we shouldn't" since at least 2004, when someone tried compiling Wine in Cygwin and trashed the registry of the host system. The excuse is "what about ancient applications that don't run properly in recent Windows." But you know the real reason is "I suffered for my art, now it's your turn." In late 2008, I got a bug in my brain and (I think it was me) started the WineOnWindows page on the Wine wiki. Summary: it was bloody impossible as things stood — going via Cygwin, MinGW or Windows Services for Unix. The current page isn't much more successful. Windows 10 introduced Windows Subsystem for Linux — and the c
(read more)
When doing multi-threaded work in Ruby, there are a couple of ways to control the execution flow within a given thread. In this article, I will be looking at Thread#pass and Queue#pop and how understanding each of them can help you drastically optimize your applications. Thread#pass - what it is and how it works One of the ways you can ask the scheduler to "do something else" is by using the Thread#pass method. Where can you find it? Well, aside from Karafka, for example in one of the most recent additions to ActiveRecord called #load_async (pull request). Let's see how it works and why it may or may not be what you are looking for when building multi-threaded applications. The Ruby docs are rather minimalistic in their description: Give the thread scheduler a hint to pass execution to another thread. A running thread may or may not switch, it depends on OS and processor. That means that when dealing with threads, you can tell Ruby that it would not be a bad idea to switch from executing the current one and focus on others. By default, all the threads you create have the same priority and are treated the same way. An excellent illustration of this is the code below: threads = [] threads = 10.times.map do |i| Thread.new do # Make threads wait for a bit so all threads are created sleep(0.001) until threads.size == 10 start = Time.now.to_f 10_000_000.times do start / rand end puts "Thread #{i},#{Time.now.to_f - start}" end end threads.each(&:join) # for i in {1..1000}; do ruby threads.rb; done > results.txt on average, the computation in each of them took a similar amount of time: The difference in between the fastest an
(read more)
Z3 is a satisfiability modulo theories (SMT) solver developed by Microsoft Research. With a description like that, you’d expect it to be restricted to esoteric corners of the computerized mathematics world, but it has made impressive inroads addressing conventional software engineering needs: analyzing network ACLs and firewalls in Microsoft Azure, for example. Z3 is used to answer otherwise-unanswerable questions like “are these two firewalls equivalent?” or “does this set of network ACLs violate any security rules?”. While those applications dealt with constraints over IP addresses (essentially very large numbers), Z3 can also analyze constraints over strings; this was used to implement AWS Zelkova, which analyzes role-based access control (RBAC) policies in the Amazon cloud. Of course, modern RBAC systems go beyond simple string comparison: they also include regular expressions! Z3 can actually handle these too, although at the time of development (pre-2018) AWS Zelkova ran into issues with Z3’s regex module, so they extended it with their own solver called Z3 Automata. Z3 Automata was sadly never open-sourced, but the following years saw a ton of work put into Z3’s string and regex functionality. So when Teleport approached me to prototype an analysis engine for their own (quite advanced!) RBAC system, it provided an ideal opportunity to take this new hotness for a spin! What questions can we ask about an RBAC system? The most basic is this: are two roles the same? Do they admit the same set of users to the same set of nodes? Here’s how I used Z3 to answer that question, analyzing constraints involving string equalit
(read more)
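A heavily simplified version of that role-equivalence question can be posed directly to Z3's Python bindings. The two role predicates below are hypothetical stand-ins, not Teleport's actual rules: the roles are equivalent exactly when no (user, node) pair is admitted by one role and rejected by the other, i.e. when the "admitted by exactly one" formula is unsatisfiable.

from z3 import Solver, String, StringVal, And, Or, Not, unsat

user = String("user")
node = String("node")

# Hypothetical roles: role_a admits admins everywhere,
# role_b admits admins only on the "prod" and "staging" nodes.
role_a = user == StringVal("admin")
role_b = And(user == StringVal("admin"),
             Or(node == StringVal("prod"), node == StringVal("staging")))

s = Solver()
# Ask for a (user, node) pair admitted by exactly one of the two roles.
s.add(Or(And(role_a, Not(role_b)), And(role_b, Not(role_a))))

if s.check() == unsat:
    print("roles are equivalent")
else:
    # A pair admitted by one role but not the other.
    print("counterexample:", s.model())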
If you’ve read Crunchy blogs recently you probably noticed by now that we’re all big fans of indexing. Indexing is key to optimizing your database workloads and reducing query times. Postgres now supports quite a few types of indexes and knowing the basics is a key part of working with Postgres. The role of database indexes is similar to the index section at the back of a book. A database index stores information on where a data row is located in a table so the database doesn't have to scan the entire table for information. When the database has a query to retrieve, it goes to the index first and then uses that information to retrieve the requested data. Indexes are their own data structures and they’re part of the Postgres data definition language (the DDL). They're stored on disk along with data tables and other objects. B-tree indexes are the most common type of index and are the default if you create an index and don’t specify the type. B-tree indexes are great for general-purpose indexing on information you frequently query. BRIN indexes are block range indexes, specially targeted at very large datasets where the data you’re searching is in blocks, like timestamps and date ranges. They are known to be very performant and space efficient. GiST indexes build a search tree inside your database and are most often used for spatial databases and full-text search use cases. GIN indexes are useful when you have multiple values in a single column, which is very common when you’re storing array or JSON data. I did all my testing on Crunchy Bridge with a hobby instance, which is very nice for this kind of quick data load and testing work. I have some samples available alongside this post if you want to follow along with the data I used. You can also use Crunchy's learning portal to do an indexing tutorial. Using Explain Analyze You almost never talk about Postgres indexing without referring to the Explain feature. This is just one of those Postgres Swiss Army knife tools that you need to have in your pocket at all times. Explain analyze will give you information like the query plan, execution time, and other useful info for
(read more)
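For readers following along, the basic loop looks like the sketch below (the users table and email column are placeholder names, not the post's actual sample data): create an index, then run the query under EXPLAIN ANALYZE and check whether the plan switches from a sequential scan to an index scan.

-- Hypothetical table and column names, for illustration only.
CREATE INDEX users_email_idx ON users (email);   -- B-tree by default

EXPLAIN ANALYZE
SELECT * FROM users WHERE email = 'blob@example.com';
-- In the output, look for "Index Scan using users_email_idx" rather than
-- "Seq Scan on users", plus the reported planning and execution times.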
I’ve long struggled with the Technical Debt metaphor. It was immediately useful when I first heard it. I still think it is useful, albeit as a starting point. The more I worked with software, the more infuriatingly incomplete it started to feel. Some years ago I found myself in a rabbit hole, researching the 2008 financial crisis. It reminded me of other insane stories like Knight Capital, and farther back, about how Enron imploded (because Enron India’s meltdown was shocking, and destructive. And because a dear friend, in his past life, was on the team at Lehman Bros. that structured financing for Enron India. So come 2008, when Lehman imploded, I got to hear about the hard-chargin' super-leveraged risk-takin' days from someone who was there in the early days of the so-called Dick Fuld era. It was all very fascinating, but I digress…). One source of my unease is that I think discussions of Technical Debt don’t sufficiently examine the nature of the Risk of the underlying challenge. The other is that the concept skews and pigeonholes the Responsibility part of the underlying challenge. Down in the rabbit hole, a slow realization began. Framing pigeonholes Responsibility. Software debt packages risk. We need better mental models of that risk. Software debt risk percepti
(read more)
When I first started writing Perl in my early 20’s, I tended to follow a lot of the structured programming conventions I had learned in school through Pascal, especially the notion that every function has a single point of exit. For example:

sub double_even_number {
    # not using signatures, this is mid-1990's code
    my $number = shift;

    if (not $number % 2) {
        $number *= 2;
    }

    return $number;
}

This could get pretty convoluted, especially if I was doing something like validating multiple arguments. And at the time I didn’t yet grok how to handle exceptions with eval and die, so I’d end up with code like:

sub print_postal_address {
    # too many arguments, I know
    my ($name, $street1, $street2, $city, $state, $zip) = @_;

    # also this notion of addresses is naive and US-centric
    my $error;

    if (!$name) {
        $error = 'no name';
    }
    else {
        print "$name\n";
        if (!$street1) {
            $error = 'no street';
        }
        else {
            print "$street1\n";
            if ($street2) {
                print "$street2\n";
            }
            if (!$city) {
                $error = 'no city';
            }
            else {
                print "$city, ";
                if (!$state) {
                    $error = 'no state';
                }
                else {
                    print "$state ";
                    if (!$zip) {
                        $error = 'no ZIP code';
                    }
                    else {
                        print "$zip\n";
                    }
                }
            }
        }
    }

    return $error;
}

What a mess. Want to count all those braces to make sure they’re balanced? This is sometimes called the arrow anti-pattern, with the arrowhead(s) being the most nested statement. The default ProhibitDeepNests perlcritic policy is meant to keep you from doing that. The way out (literally) is guard clauses: checking early if something is valid and bail
(read more)
Hotcaml is an OCaml interpreter that starts from a source file and loads its dependencies. When one of the source files changes and passes the typechecker, it is reloaded, as well as all of its reverse dependencies. To get started, clone the repository and type make. Two frontends are built: hotcaml.exe and hotcaml_lwt.exe. Starting hotcaml A hotcaml invocation takes three kinds of arguments: hotcaml [ -package pkg ]* [ -I path ]* entrypoint.ml* The pkg's should be valid findlib package names. They will be loaded in order during startup. The path's are directories that will be searched for dependencies. Finally, the entrypoints are the actual code that we want to interpret. Each entrypoint is loaded and interpreted in order. Dependencies of an entrypoint are looked up in the path's and then in the loaded packages. Once execution of the entrypoints is done, the interpreter watches the disk for changes. If one of the source files changes, it is reloaded and interpretation resumes from that module, followed by all of its reverse dependencies. If one of the dependencies does not typecheck, reloading is postponed until all errors are solved. Synchronous and asynchronous frontends Contrary to the normal execution of an OCaml program, modules can be loaded and unloaded multiple times during execution. The synchronous hotcaml only looks for changes after execution is done. This is not really convenient for interactive programs, where we might want to reload during execution rather than after. hotcaml_lwt provides an asynchronous frontend: lwt threads continue to execute after loading, and modules can be reloaded concurrently. Observing the reload process The Hotlink module can be used to customize the behavior of hot-loaded programs. Hotlink.is_hot_loaded () : bool is true only when called from a module that has been hot-loaded. Hotlink.is_hot_unloaded () : bool is true only when called from a module that was hot-loaded and has since been unloaded. Hotlink.on_unload : (unit -> unit) -> unit registers a callback that will be invoked when a hot-loaded module is unloaded. Hotlink.on_unload_or_at_exit : (unit -> unit) -> unit calls the callback either during unloadin
(read more)
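Putting the Hotlink hooks above together, a hot-loaded module might look roughly like the sketch below; start_server and stop_server are made-up placeholders for whatever resource the module owns and needs to release on reload.

(* Sketch of a module intended to be hot-loaded by hotcaml. *)
let start_server () = print_endline "server started"
let stop_server () = print_endline "server stopped"

let () =
  if Hotlink.is_hot_loaded () then
    print_endline "running under hotcaml (hot-loaded)";
  start_server ();
  (* Release the resource when hotcaml unloads or replaces this module,
     or at exit if it is never unloaded. *)
  Hotlink.on_unload_or_at_exit stop_server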
This came up at my day job when two programmers were trying to get a block of data to be the size both expected it to be. Consider this example:

typedef struct
{
    uint8_t  byte1;   // 1
    uint16_t word1;   // 2
    uint8_t  byte2;   // 1
    uint16_t word2;   // 2
    uint8_t  byte3;   // 1
                      // 7 bytes
} MyStruct1;

The above structure represents three 8-bit byte values and two 16-bit word values for a total of 7 bytes. However, if you were to run this code in GCC for Windows, and print the sizeof() of that structure, you would see it returns 10
(read more)
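The gap between 7 and 10 is alignment padding: each uint16_t must start on a 2-byte boundary, so the compiler inserts a byte after byte1 and after byte2, plus a tail byte so the struct's size is a multiple of its alignment. The sketch below shows the effect and one common fix (grouping members by size); exact sizes depend on the compiler and target ABI.

#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t  byte1;   /* 1 byte + 1 byte of padding before word1 */
    uint16_t word1;   /* 2 bytes */
    uint8_t  byte2;   /* 1 byte + 1 byte of padding before word2 */
    uint16_t word2;   /* 2 bytes */
    uint8_t  byte3;   /* 1 byte + 1 byte of tail padding */
} MyStruct1;          /* typically sizeof == 10 */

typedef struct {
    uint16_t word1;   /* putting the 2-byte members first...      */
    uint16_t word2;
    uint8_t  byte1;   /* ...lets the 1-byte members pack together */
    uint8_t  byte2;
    uint8_t  byte3;
} MyStruct2;          /* typically sizeof == 8 (7 + 1 tail byte)  */

int main(void) {
    printf("MyStruct1: %zu bytes\n", sizeof(MyStruct1));
    printf("MyStruct2: %zu bytes\n", sizeof(MyStruct2));
    return 0;
}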
Introduction CMake wrapper for Xrepo C and C++ package manager. This allows using CMake to build your project, while using Xrepo to manage dependent packages. This project is partially inspired by cmake-conan. Example use cases for this project: Existing CMake projects which want to use Xrepo to manage packages. New projects which have to use CMake, but want to use Xrepo to manage packages. Usage Use package from official repository
(read more)
Many modern production environments are built on top of Docker and Kubernetes. It is common to see READMEs for open source tools offering build options for the Docker crowd, and sometimes tools only support Docker-based workflows. This is a natural product of the reality that many developer workflows are built on top of Docker – when you already know a tool, it makes sense to also use it in your other projects. While there have been efforts to bring Docker to FreeBSD, none of these are really mature. The presence of Docker in so many areas and the lack of Docker support in FreeBSD might make you think that you are out of luck if you want DevOps workflows for managing clusters of computers. Pot is a jail abstraction framework/management tool that aims to replace Docker in your DevOps tool chest, and it has support for using Nomad for orchestration of clustered services. The team behind Pot is aiming to provide modern container infrastructure on top of FreeBSD and has been progressing over the last 3 years to get Pot into production. The Pot project was started in 2018 with the ambitious goal of taking the best things from Linux container management and creating a new container model based on FreeBSD technologies, running on FreeBSD. Pot is based on the core, proven FreeBSD tools: jails, zfs, VNET and pf, and it uses rctl and cpuset to constrain the resources available to each container. These tools are used to manage jail configuration, dataset/filesystem management, network management, and resource limitation. Part of why the success of Docker and similar tools was such a surprise to FreeBSD sysadmins was that FreeBSD's core tools already made the job of running relatively complex clusters quite straightforward. Pot aims to keep things simple and uses core FreeBSD features to implement functionality when possible. For example, there is no need to invent new functionality to move images between hosts when zfs snapshots and zfs send | zfs recv already exist and are well understood. Nomad is a cluster manager and scheduler that provides a common workflow to deploy applications across an infrastructure. A cluster manager handles distributing applications across a set of hosts based on load and cluster usage. Nomad has support for provisioning and managing images of many diff
(read more)
Tasks in Swift are part of the concurrency framework introduced at WWDC 2021. A task allows us to create a concurrent environment from a non-concurrent method, calling methods using async/await. When working with tasks for the first time, you might recognize similarities between dispatch queues and tasks. Both allow dispatching work on a different thread with a specific priority. Yet, tasks are quite different and make our lives easier by taking away the verbosity of dispatch queues. If you’re new to async/await, I recommend first reading my article Async await in Swift explained with code examples. How to create and run a Task Creating a basic task in Swift looks as follows:

let basicTask = Task {
    return "This is the result of the task"
}

As you can see, we're keeping a reference to our basicTask which returns a string value. We can use the reference to read out the outcome value:

let basicTask = Task {
    return "This is the result of the task"
}
print(await basicTask.value)
// Prints: This is the result of the task

This example returns a string but could also have thrown an error:

let basicTask = Task {
    // .. perform some work
    throw ExampleError.somethingIsWrong
}

do {
    print(try await basicTask.value)
} catch {
    print("Basic task failed with error: \(error)")
}
// Prints: Basic task failed with error: somethingIsWrong

In other words, you can use a task to produce both a value and an error. How do I run a task? Well, the above examples already
(read more)
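Beyond the basics in the excerpt, two details worth knowing are shown in the short sketch below (which is not from the original article): Task also accepts an explicit priority, and keeping the reference around lets you cancel it. Cancellation is cooperative, so the task body has to check for it.

let fetchTask = Task(priority: .userInitiated) {
    // Cooperatively honor cancellation; throws CancellationError if cancelled.
    try Task.checkCancellation()
    return "Fetched value"
}

// Later, e.g. when the screen disappears. This only marks the task as
// cancelled; the body must observe it via checkCancellation / Task.isCancelled.
fetchTask.cancel()

do {
    print(try await fetchTask.value)
} catch is CancellationError {
    print("Task was cancelled")
} catch {
    print("Task failed with error: \(error)")
}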
Python is not the fastest programming language. So when you need to process a large amount of homogeneous data quickly, you’re told to rely on “vectorization.” This leads to more questions: What does “vectorization” actually mean? When does it apply? How does vectorization actually make code faster? To answer those questions, we’ll consider interesting performance metrics, learn some useful facts about how CPUs work, and discover that NumPy developers are working hard to make your code faster. What “vectorization” means, and when it applies Let’s say we have a few million numbers in a list or array, and we want to do some mathematical operations on them. Since we know they are all numbers, and if we’re doing the same operation on all of the numbers, we can “vectorize” the operation, i.e. take advantage of this homogeneity of data and operation. This definition is still vague, however. To be more specific, there are at least three possible meanings, depending on who is talking: API design: A vectorized API is designed to work on homogeneous arrays of data at once, instead of item by item in a for loop. This is orthogonal to performance, though: in theory you might have a fast for loop, or you might have a slow batch API. A batch operation implemented in a fast language: This is a Python-specific meaning, and does have a performance implication. By doing all that work in C or Rust, you can avoid calling into slow Python. An ope
(read more)
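A quick way to feel the difference in practice: the same elementwise operation done item by item in Python versus handed to NumPy as one batch call. The timings printed will vary by machine; the point is only the relative gap.

from time import perf_counter
import numpy as np

data = list(range(1_000_000))
arr = np.array(data)

start = perf_counter()
doubled_loop = [x * 2 for x in data]   # item by item, in the Python interpreter
loop_time = perf_counter() - start

start = perf_counter()
doubled_vec = arr * 2                  # one batch operation, in compiled code
vec_time = perf_counter() - start

print(f"loop: {loop_time:.4f}s, vectorized: {vec_time:.4f}s")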
As of late I have had some pain points with iocage, which I have used since I started using FreeBSD in 2017. I came from an Ubuntu with LXD + ZFS background and iocage had the command line interface I wanted that felt familiar with LXD at the time. Well, iocage seems dead now. Its last release was in 2019 and its last commit (at the time of writing this) was September 30, 2021. Of course, that commit isn't in what's in FreeBSD ports, unless you use the devel package, and that package has some issues (for me, iocage list doesn't work right). Because of this, I decided to take up the challenge of making my own base jails. To start, I will give credit where credit is due and say I followed these resources to get me to where I am: Michael W Lucas' FreeBSD Jail Mastery book, https://clinta.github.io/freebsd-jails-the-hard-way/, https://www.skyforge.at/posts/a-note-in-sysvipc-and-jails-on-freebsd/, and iocage and their fstab files. Creating the Release First off, we need to create a release jail. This is a base image that we can use to make cloned jails, thick jails, or our base jails from. I'm going to start by making a new jails dataset and mounting it at /jails

$ zfs create -o mountpoint=/jails zroot/jails

Here is the foundation for everything. Now I'll create a few other datasets for our releases and templates and running jails, as well as our first release dataset (13.0-RELEASE)

$ zfs create -p zroot/jails/releases/13.0-RELEASE
$ zfs create zroot/jails/templates
$ zfs create zroot/jails/jails

Next, we need to download the base OS as well as lib32 for our jail. The contents should be extracted into /jails/releases/13.0-RELEASE in the end.

$ fetch https://ftp.freebsd.org/pub/FreeBSD/releases/amd64/amd64/13.0-RELEASE/base.txz -o /tmp/base.txz
$ fetch https://ftp.freebsd.org/pub/FreeBSD/releases/amd64/amd64/13.0-RELEASE/lib32.txz -o /tmp/lib32.txz
$ tar -xf /tmp/base.txz -C /jails/releases/13.0-RELEASE
$ tar -xf /tmp/lib32.txz -C /jails/releases/13.0-RELEASE

Now let us update the jail contents

$ env UNAME_r=13.0-RELEASE freebsd-update -b /jails/releases/13.0-RELEASE fetch install
$ env UNAME_r=13.0-RELEASE freebsd-update -b /jails/releases/13.0-RELEASE IDS

Then, we can copy our /etc/localtime and our /etc/resolv.conf files into the jail

$ cp /etc/localtime /jails/releases/13.0-RELEASE/etc/localtime
$ cp /etc/resolv.conf /jails/releases/13.0-RELEASE/etc/resolv.conf

Nice. Now we have our base. Let's snapshot it so we can clone it. We will clone this to our templates folder after we take the snapshot:

$ zfs snapshot zroot/jails/releases/[email protected]
$ zfs clone zroot/jails/releases/[email protected] zroot/jails/templates/base-13.0-RELEASE

This part is done. The release is made, and now we have a base created for our base jails. Creating Our Skeleton Since we want to be using nullfs mounts for our base jail, we are going to want to make another clone and wipe out the contents of that new clone. Here I think you can debate whether or not you want to take a clone of the base-13.0-RELEASE clone from earlier, or if you want to clone from the release. I opted to clone from the release. Maybe one is a proper
(read more)
Introducing Rust for Windows In the Overview of developing on Windows with Rust topic, we demonstrated a simple app that outputs a Hello, world! message. But not only can you use Rust on Windows, you can also write apps for Windows using Rust. Rust for Windows is the latest language projection for Windows. It's currently in preview form, and you can see it develop from version to version in its change log. Rust for Windows lets you use any Windows API (past, present, and future) directly and seamlessly via the windows crate (crate is Rust's term for a binary or a library, and/or the source code that builds into one). Whether it's timeless functions such as CreateEventW and WaitForSingleObject, powerful graphics engines such as Direct3D, traditional windowing functions such as CreateWindowExW and DispatchMessageW, or more recent user interface (UI) frameworks such as Composition and Xaml, the windows crate has you covered. The win32metadata project aims to provide metadata for Win32 APIs. This metadata describes the API surface—strongly-typed API signatures, parameters, and types. This enables the entire Windows API to be projected in an automated and complete way for consumption by Rust (as well as languages such as C# and C++). Also see Making Win32 APIs more accessible to more languages. As a Rust developer, you'll use Cargo (Rust's package management tool)—along with https://crates.io (the Rust community's crate registry)—to manage the dependencies in your projects. The good news is that you can reference the windows crate from your Rust apps, and then immediately begin calling Windows APIs. You can also find Rust documentation for the windows crate over on https://docs.rs. Similar to C++/WinRT, Rust for Windows is an open source language projection developed on GitHub. Use the Rust for Windows repo if you have questions about Rust for Windows, or if you wish to report issues with it. The Rust for Windows repo also has some simple examples that you can follow. And there's an excellent sample app in the form of Robert Mikhayelyan's Minesweeper. Contribute to Rust for Windows Rust for Windows welcomes your contributions! Identify and fix bugs in the source code Rust documentation for the Windows API Rust for Window
(read more)
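To give a feel for what "calling Windows APIs directly" looks like, here is a sketch in the shape of the windows crate's own samples, using the CreateEventW and WaitForSingleObject functions mentioned above. Exact signatures and the required Cargo feature flags (roughly Win32_Foundation and Win32_System_Threading here) vary between crate versions, so treat this as illustrative rather than copy-paste ready.

use windows::{core::Result, Win32::Foundation::CloseHandle, Win32::System::Threading::*};

fn main() -> Result<()> {
    unsafe {
        // Create an unnamed, manual-reset, initially non-signaled event...
        let event = CreateEventW(None, true, false, None)?;
        // ...signal it, wait on it with a 0 ms timeout, then clean up.
        SetEvent(event).ok()?;
        WaitForSingleObject(event, 0);
        CloseHandle(event).ok()?;
    }
    Ok(())
}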
Welcome! I’m Brandon Rhodes (website, Twitter) and this is my evolving guide to design patterns in the Python programming language. This site is letting me collect my ideas about Python and Design Patterns all in one place. My hope is that these pages make the patterns more discoverable — easier to find in web searches, and easier to read — than when they were scattered across the videos and slides of my Python conference talks. The weight of other obligations makes my progress intermittent. To check for new material, simply visit the commit history of this site’s project repository on GitHub, where you can also select “Watch” to get updates. With those preliminaries c
(read more)
Here at KDAB, we recently published a library called KDBindings, which aims to reimplement both Qt signals and slots, and data binding, in pure C++17. To get an int
(read more)
We felt more like “Oh fuck, Databreach” During the pandemic, grocery delivery services gained popularity. New players on the market offer delivery in under an hour. One of them is Gorillas, which not only delivers apples and granola bars in 10 minutes, but just as quickly delivered the data of all its customers. How could this happen? Unfortunately, it was once again much too simple. But let’s start at the beginning: Gorillas is currently the largest of these services in Germany. On large billboards they promise delivery times of under 10 minutes. Orders are picked in decentralized depots and delivered by riders on bicycles. A few weeks ago, we already stumbled across a security vu
(read more)
Git Town makes Git more efficient, especially for large teams. See this screencast for an introduction and this Softpedia article for an independent r
(read more)