notice: I've disabled signup/login as malformed RSS feeds were costing me loads in cloud bills. Will look at a better way to fix this in the future. Contact me on Twitter if there's a feed you'd like included in the meantime.

After a very long porting journey, Launchpad is finally running on Python 3 across all of our systems. I wanted to take a bit of time to reflect on why my emotional responses to this port differ so much from those of some others who’ve done large ports, such as the Mercurial maintainers. It’s hard to deny that we’ve had to burn a lot of time on this, which I’m sure has had an opportunity cost, and from one point of view it’s essentially running to stand still: there is no single compelling feature that we get solely by porting to Python 3, although it’s clearly a prerequisite for tidying up old compatibility code and being able to use modern language facilities in the future. And yet, on the whole, I found this a rewarding project and enjoyed doing it. Some of this may be because by inclination I’m a maintenance programmer and actually enjoy this sort of thing. My default view tends to be that software version upgrades may be a pain but it’s much better to get that pain over with as soon as you can rather than trying to hold back the tide; you can certainly get involved and try to shape where things end up, but rightly or wrongly I can’t think of many cases when a righteously indignant user base managed to arrange for the old version to be maintained in perpetuity so that they never had to deal with the new thing (OK, maybe Perl 5 counts here). I think a more compelling difference between Launchpad and Mercurial, though, may be that very few other people really had a vested interest in what Python version Launchpad happened to be running, because it’s all server-side code (aside from some client libraries such as launchpadlib, which were ported years ago). As such, we weren’t trying to do this with the internet having Strong Opinions at us. We were doing this because it was obviously the only long-term-maintainable path forward, and in more recent times because some of our library dependencies were starting to drop support for Python 2 and so it was obviously going to become a practical problem for us sooner or later; but if we’d just stayed on Python 2 forever then fundamentally hardly anyone else would really have cared directly, only maybe about some indirect consequences of that. I don’t follow Mercurial development so I may be entirely off-base, but if other people were yelling at me about how late my project was to finish its port, that in itself would make me feel more negatively about the project even if I thought it was a good idea. Having most of the pressure come from ourselves rather than from outside meant that wasn’t an issue for us. I’m somewhat inclined to think of the process as an extreme version of paying down technical debt. Moving from Python 2.7 to 3.5, as we just did, means skipping over multiple language versions in one go, and if similar changes had been made more gradually it would probably have felt a lot more like the typical dependency update treadmill. I appreciate why not everyone might want to think of it this way: maybe this is just my own rationalization. Reflections on porting to Python 3 I’m not going to defend the Python 3 migrat
(read more)
The Heptagon of Configuration is a term I'm coining to describe a pattern I've observed in software configuration, where configuration evolves through specific, increasing levels of flexibility and complexity before returning to a restrictive and simple implementation. How does the Cycle Work? Hardcoded values are the simplest configuration - but provide very little flexibility. The program surface increases, and with it the configuration, incorporating environment variables*, flags, and, when that becomes cumbersome, a configuration file to encode the previous. When multiple environments require
(read more)
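To make the middle stages of that cycle concrete, here is a minimal Python sketch (all names, such as APP_PORT and config.json, are hypothetical and not from the post) of one setting that starts as a hardcoded value and can then be overridden by a config file, an environment variable, and finally a command-line flag:

import argparse
import json
import os

DEFAULT_PORT = 8080  # stage 1: hardcoded value

def resolve_port(argv=None):
    """Resolve 'port' with increasing precedence: default < file < env var < flag."""
    port = DEFAULT_PORT
    try:
        # later stage: a configuration file encoding the earlier settings
        with open("config.json") as fh:
            port = int(json.load(fh).get("port", port))
    except FileNotFoundError:
        pass
    # stage: environment variable
    if "APP_PORT" in os.environ:
        port = int(os.environ["APP_PORT"])
    # stage: command-line flag
    parser = argparse.ArgumentParser()
    parser.add_argument("--port", type=int, default=None)
    args = parser.parse_args(argv)
    if args.port is not None:
        port = args.port
    return port

if __name__ == "__main__":
    print(resolve_port())

Each added layer buys flexibility at the cost of complexity, which is exactly the drift the post describes.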
2021-08-01 4 min read Welcome Let’s build some stuff. Today on AWS (EKS). Today with popular solutions and almost without coding (in YAML). Today the focus is on speed and simplicity. Additionally, no CI/CD. Components I will keep it simple: Terraform, aws cli, docker, helm, curl, dig, kubectl, emacs. Building infrastructure Setup First steps. I need to build some infrastructure. As almost everyone uses Terraform for building and managing resources, I decided to use it too. Based on the Terraform EKS module I’ve created a sample manifest. Also, I decided to add ECR and configure output in a specific way. # I'd like to have
(read more)
C++20 added concepts as a language feature. They’re often compared to Haskell’s type classes, Rust’s traits or Swift’s protocols. Yet there is one feature that sets them apart: types model C++ concepts automatically. In Haskell, you need an instance, in Rust, you need an impl, and in Swift, you need an extension. But in C++? In C++, concepts are just fancy boolean predicates that check for well-formed syntax: every type that makes the syntax well-formed passes the predicate and thus models the concept. This was the correct choice, but is sometimes not what you want. Let’s explore it further. Nominal vs. structural concepts To co-opt terms from type systems, C++20 concepts use structural typing: a type models the concept if it has the same structure as the one required by the concept, i.e. it has the required expressions. In contrast, type classes, traits and protocols all use nominal typing: a type models the concept only if the user has written a declaration to indicate it. For example, consider a C++ concept that checks for operator== and operator!=: template <typename T> concept equality_comparable = requires (T obj) { { obj == obj } -> std::same_as<bool>; { obj != obj } -> std::same_as<bool>; }; This is how you write a type that models equality_comparable with C++20’s structural concepts: // Define your type, struct vec2 { float x, y; // define the required operators, friend bool operator==(vec2 lhs, vec2 rhs) { return lhs.x == rhs.x && lhs.y == rhs.y; } // operator!= not needed in C++20 due to operator rewrite rules! }; // ... and that's it! static_assert(equality_comparable<vec2>); In contrast, this is how you would write a type that models equality_comparable in a hypothetical C++20 with nominal concepts: // Define your type struct vec2 { … }; // as before // ... and tell the compiler that it should be `equality_comparable`. // Most languages also support a way to define the operation here. concept equality_comparable for vec2; Nominal is better… In my opinion, nominal concepts are superior to structural concepts: Structural concepts do not allow for semantic differences between concepts, because that is not part of the “structure”. Consider the standard library concept std::relation; it is true for predicate t
(read more)
PRECIS (Preparation, Enforcement, and Comparison of Internationalized Strings) is a framework for consistent and secure management of Unicode strings in web applications. If you haven’t read my previous article Input validation of free-form Unicode text in Python, it contained the problem statement and a low-level solution using Unicode character categories. PRECIS goes one step further by proposing specific string classes that represent typical usage scenarios involving processing of Unicode strings. PRECIS starts from just two use cases. The first is a string used as an identifier, one that will subsequently be used in URIs and databases, where one of the most challenging problems is reliable comparison. For example, are “ŻÓBR” and “ŻÓBR” the same usernames, or group names? Visually they should be identical in most fonts and displays, and both could have been honestly typed by the same user using different keyboards, yet they are composed of different code points. First, using a non-combining keyboard: > import unicodedata > x='ŻÓBR' > for c in x: print(f'{c}: {unicodedata.name(c)}') Ż: LATIN CAPITAL LETTER Z WITH DOT ABOVE Ó: LATIN CAPITAL LETTER O WITH ACUTE B: LATIN CAPITAL LETTER B R: LATIN CAPITAL LETTER R Second, using letters followed by combining accents: > x='Z\u0307O\u0301BR' > x 'ZOBR' > for c in x: print(f'{c}: {unicodedata.name(c)}') Z: LATIN CAPITAL LETTER Z : COMBINING DOT ABOVE O: LATIN CAPITAL LETTER O : COMBINING ACUTE ACCENT B: LATIN CAPITAL LETTER B R: LATIN CAPITAL LETTER R The usual byte-by-byte comparison will fail, and if you’re not careful your application will allow creation of visually identical usernames that are assigned distinct user objects. In my previous article (Input validation of free-form Unicode text in Python) I suggested using Unicode normalisation to always convert these homoglyphic forms into a single, consistent one. PRECIS The two string classes proposed by PRECIS are IdentifierClass and FreeformClass, and their purpose is quite self-describing. What sits inside them is a carefully selected combination of character classes (such as letters, digits, spaces) that are allowed, others that are disallowed (e.g. funny text direction changing characters), additional contextual rules, as well as a policy towards characters that are yet unknown in the current version of Unicode. As you can guess, these rules for IdentifierClass are much more strict, while for FreeformClass they are much more lax and permissive. Not surprisingly, Unicode normalisation (specifically, NFC) is an important part of these transformations. On top of these basic string classes, you can build your own string profiles that reflect your application’s data objects more accurately. For example, one Python library, precis-i18n, implements UsernameCasePreserved (strict) and NicknameCasePreserved (lax). Here’s what happens when you try to pass my name through both of them. First, the nickname profile, apparently intended to be displayed as the profile name but not used in identifiers: > import precis_i18n > precis_i18n.get_profile('NicknameCasePreserved').enforce('Paweł Krawczyk') 'Paweł Krawczyk' However, let’s tr
(read more)
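Sticking to the standard library, here is a minimal sketch of the comparison problem from the PRECIS excerpt above: the two spellings differ at the code-point level, and NFC normalization (which PRECIS profiles also apply) makes them compare equal.

import unicodedata

precomposed = 'ŻÓBR'               # single precomposed code points
combining = 'Z\u0307O\u0301BR'     # base letters followed by combining marks

# A code-point-by-code-point comparison fails although both render the same.
print(precomposed == combining)    # False

# NFC normalization folds combining sequences into precomposed characters.
nfc = lambda s: unicodedata.normalize('NFC', s)
print(nfc(precomposed) == nfc(combining))  # True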
Including an example of property-based testing without much partitioning. A tweet from Brian Marick induced me to read a paper by Dick Hamlet and Ross Taylor called Partition Testing Does Not Inspire Confidence. In general, I find the conclusion fairly intuitive, but on the other hand hardly an argument against property-based testing. I'll later return to why I find the conclusion intuitive, but first, I'd like to address the implied connection between partition testing and property-based testing. I'll also show a detailed example. The source code used in this article is available on GitHub. Not the same # The Hamlet & Taylor paper is exclusively concerned with partition testing, which makes sense, since it's from 1990. As far as I'm aware, property-based testing wasn't invented until later. Brian Marick extends its conclusions to property-based testing: "I've been a grump about property-based testing because I bought into the conclusion of Hamlet&Taylor's 1990 "Partition testing does not inspire confidence"" This seems to imply that property-based testing isn't effective, because (if you accept the paper's conclusions) partition testing isn't effective. There's certainly overlap between partition testing and property-based testing, but it's not complete. Some property-based testing isn't partition testing, or the other way around: To be fair, the overlap may easily be larger than the figure implies, but you can certainly describe properties without having to partition a function's domain. In fact, the canonical example of property-based testing (that reversing a list twice yields the original list: reverse (reverse xs) == xs) does not rely on partitioning. It works for all finite lists. You may think that this is only because the case is so simple, but that's not the case. You can also avoid partitioning on the slightly more complex problem presented by the Diamond kata. In fact, the domain for that problem is so small that you don't need a property-based framework. You may argue that the Diamond kata is another toy problem, but I've also solved a realistic, complex business problem with property-based testing without relying on partitioning. Granted, the property shown in that article doesn't sample uniformly from the entire domain of the System Under Test, but the property (there's only one) doesn't rely on partitioning. Instead, it relies on incremental tightening of preconditions to tease out the desired behaviour. I'll show another example next. FizzBuzz via partitioning # When introducing equivalence classes and property-based testing in workshops, I sometimes use the FizzBuzz kata as an example. When I do this, I first introduce the concept of equivalence classes and then proceed to explain how instead of manually picking values from each partition, you can randomly sample from them: [<Property(QuietOnSuccess = true)>] let ``FizzBuzz.transform returns Buzz`` (number : int) = (number % 5 = 0 && number % 3 <> 0) ==> lazy let actual = FizzBuzz.transform number let e
(read more)
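As a rough Python counterpart to the excerpt's canonical example, here is the reverse-twice property written with the Hypothesis library; it quantifies over arbitrary finite lists and involves no partitioning of the input domain (the test name is mine, not from the article).

from hypothesis import given, strategies as st

@given(st.lists(st.integers()))
def test_reverse_twice_yields_original(xs):
    # The property holds for all finite lists; no equivalence classes needed.
    assert list(reversed(list(reversed(xs)))) == xs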
Tilck (Tiny Linux-Compatible Kernel) Contents Overview What is Tilck? Future plans What Tilck is NOT ? Features Hardware support File systems Processes and signals I/O Console Userspace applications Screenshots Booting Tilck Tilck's bootloader 3rd-party bootloaders Grub support Documentation and HOWTOs Building Tilck Testing Tilck Debugging Tilck Tilck's debug panel A comment about user experience FAQ Why Tilck does not have the feature/abstraction XYZ? Why Tilck runs only on x86 (ia-32)? Why having support for FAT32? Why keeping the initrd mounted? Why using 3 spaces as indentation? Why many commit messages are so short? Overview What is
(read more)
I just discovered a lurking problem in the timebase.c module in all of the branches for releases >=3.20: In gpsd_gpstime_resolv(): /* sanity check week number, GPS epoch, against leap seconds * Does not work well with regressions because the leap_sconds * could be from the receiver, or from BUILD_LEAPSECONDS. */ if (0 < session->context->leap_seconds && 19 > session->context->leap_seconds && 2180 < week) { /* assume leap second = 19 by 31 Dec 2022 * so week > 2180 is way in the future, do not allow it */ week -= 1024; GPSD_LOG(LOG_WARN, &session->context->errout, "GPS week confusion. Adjusted week %u for lea
(read more)
August 01, 2021 The recent release of PetitPotam by @topotam77 motivated me to get back to Windows RPC fuzzing. On this occasion, I thought it would be cool to write a blog post explaining how one can get into this security research area. RPC as a Fuzzing Target? As you know, RPC stands for “Remote Procedure Call”, and it isn’t a Windows specific concept. The first implementations of RPC were made on UNIX systems in the eighties. This allowed machines to communicate with each other on a network, and it was even “used as the basis for Network File System (NFS)” (source: Wikipedia). The RPC implementation developed by Microsoft and used on Windows is DCE/RPC, which i
(read more)
# The Myth of RAM, part I (April 21, 2014) ## Preface This article is the first of four in a series, in which I argue that thinking of a memory access as _O(1)_ is generally a bad idea, and we should instead think of memory accesses as taking _O(√N)_ time. In part one I lay out a hand-wavy argument based on a benchmark. In [part II](2014_04_28_myth_of_ram_2.html) I build up a mathematical argument based in theoretical physics, and in [part III](2014_04_29_myth_of_ram_3.html) I investigate some implications. [Part IV](2015_02_09_myth_of_ram_4.html) is a FAQ in which I answer some common questions and misunderstandings. (This preface was added on August 29, 2016) ## Intr
(read more)
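The benchmark-style argument can be imitated with a small Python sketch (sizes and access counts are arbitrary, and interpreter overhead blunts the effect compared to the article's benchmark): time random accesses into progressively larger arrays and watch the cost per access creep up with N instead of staying flat.

import random
import time

def ns_per_access(n, accesses=200_000):
    data = list(range(n))
    indices = [random.randrange(n) for _ in range(accesses)]
    total = 0
    start = time.perf_counter()
    for i in indices:
        total += data[i]  # touch memory at a random location in the working set
    elapsed = time.perf_counter() - start
    return elapsed / accesses * 1e9

for n in (10_000, 100_000, 1_000_000, 10_000_000):
    print(f"N={n:>10}: {ns_per_access(n):6.1f} ns per access")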
Introduction There are a ton of great explainers of what graph neural networks are. However, I find that a lot of them go pretty deep into the math pretty quickly. Yet, we still are faced with that age-old problem: where are all the pics?? As such, just as I had attempted with Bayesian deep learning, I'd like to try to demystify graph deep learning as
(read more)
Jason A. Donenfeld Jason at zx2c4.com Mon Aug 2 17:27:37 UTC 2021 Hey everyone, After many months of work, Simon and I are pleased to announce the WireGuardNT project, a native port of WireGuard to the Windows kernel. This has been a monumental undertaking, and if you've noticed that I haven't read emails in about two months, now you know why. WireGuardNT, lower-cased as "wireguard-nt" like the other r
(read more)
Your log data is a treasure-trove of information about your application, but it can be overwhelming. This post will dig into several strategies for extracting metrics and other helpful information from your logs. We’ll start with the basics of the heroku logs command, then we’ll dig into the real fun using a tool called Angle Grinder. How to view your Heroku logs heroku logs on its own just prints the most recent logs from your app and then exits. Generally that’s not very useful. I almost always want the -t (or --tail) option to continually tail my logs. Additionally I usually want it scoped to a specific dyno process, so I’ll include -d router, -d web, or -d worker so I’m only seeing relevant logs. Here’s how I would tail my router logs: heroku logs -t -d router 2021-07-28T16:23:07.870849+00:00 heroku[router]: at=info method=POST path="/api/REDACTED/v2/reports?dyno=web.8&pid=4" host=api.railsautoscale.com request_id=0ce66277-877c-4d4f-91c4-2c1075089b41 fwd="3.84.54.241,172.70.34.122" dyno=web.7 connect=1ms service=156ms status=204 bytes=358 protocol=https 2021-07-28T16:23:07.774247+00:00 heroku[router]: at=info method=POST path="/api/REDACTED/v2/reports?dyno=web.3&pid=81" host=api.railsautoscale.com request_id=fe46b69d-8938-4d41-a566-4c837050f6da fwd="3.85.98.203,172.69.62.61" dyno=web.14 connect=1ms service=14ms status=204 bytes=358 protocol=https 2021-07-28T16:23:07.627308+00:00 heroku[router]: at=info method=POST path="/api/REDACTED/v2/reports?dyno=web.1&pid=11" host=api.railsautoscale.com request_id=f5b69be4-8283-48f6-b683-30c051b4f51d fwd="34.232.107.232,172.70.42.94" dyno=web.11 connect=0ms service=327ms status=204 bytes=358 protocol=https 2021-07-28T16:23:07.740752+00:00 heroku[router]: at=info method=POST path="/api/REDACTED/v2/reports?dyno=web.4&pid=4" host=api.railsautoscale.com request_id=89d2da0d-e1d8-484d-99f2-bf26921ba9a5 fwd="3.249.54.29,162.158.158.123" dyno=web.11 connect=0ms service=354ms status=204 bytes=358 protocol=https 2021-07-28T16:23:07.881220+00:00 heroku[router]: at=info method=POST path="/api/REDACTED/v2/reports?dyno=web.1&pid=24539" host=api.railsautoscale.com request_id=37171239-4524-46cf-9dce-7cdda8bd4ace fwd="3.95.
(read more)
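In the same spirit as the Angle Grinder examples the post goes on to show, here is a rough Python sketch (field names taken from the router lines above, script name mine) that pulls service= durations out of a stream of Heroku router logs and prints a simple summary.

import re
import sys
from statistics import median

SERVICE_RE = re.compile(r"\bservice=(\d+)ms\b")
STATUS_RE = re.compile(r"\bstatus=(\d+)\b")

def summarize(lines):
    durations, statuses = [], {}
    for line in lines:
        m = SERVICE_RE.search(line)
        if m:
            durations.append(int(m.group(1)))
        s = STATUS_RE.search(line)
        if s:
            statuses[s.group(1)] = statuses.get(s.group(1), 0) + 1
    if durations:
        print(f"requests={len(durations)} median={median(durations)}ms max={max(durations)}ms")
    print("status counts:", statuses)

if __name__ == "__main__":
    # e.g. heroku logs -t -d router | python log_summary.py
    summarize(sys.stdin)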
A mysterious, one-letter npm package named "-" sitting on the registry since 2020 has received over 700,000 downloads. What's more? The package contains no functional code, so what makes it score so many downloads? Inside the npm package "-" An npm package called "-" has scored almost 720,000 downloads since its publication on the npm registry in early 2020. There's only one version of the package: 0.0.1 and it contains three files: tar tvf 0.0.1/--0.0.1.tgz package/dist/index.js package/package.json package/README.md Inside these files—mainly the manifest (package.json) and index.js, there is nothing phenomenally interesting, just skeleton code. The manifest does pull in a bunch of development dependencies (devDependencies) and invokes some commands on the "ts-node" component, but that's about it. It's practically dead code, for now: The index.js file and the manifest file (package.json) of "-" (BleepingComputer) "-" is used by over 50 packages It gets even better. The practically useless package "-" serves as a dependency for over 50 npm packages, without a clear explanation: npm package "-" is used by 56 packages (npmjs.org) But most of these dependencies have no more than a few dozen weekly downloads. So, how is it that "-" has scored almost 720,000 down
(read more)
31 Jul 2021 The computers sitting on our desks are incomprehensibly fast. They can perform more operations in one second than a human could in one hundred years. We live in an era of CPUs that can perform billions of instructions per second, tens of billions if we take multi-cores into account, of memory that can transfer data to the CPU at hundreds of gigabytes per second, of disks that support streaming reads of gigabytes per second. This era of incredibly fast hardware is also the era of programs that take tens of seconds to start from an SSD or NVMe disk; of bloated web applications that take many seconds to show a simple list, even on a broadband connection; of programs that process data at a thousandth of the speed we should expect. Software is laggy and sluggish — and the situation shows few signs of improvement. Why is that? I believe that the main reason is that most of us who started programming after, say, the year 2000, have never learned how to make reasonable use of the computational resources at our disposal. In fact, most of our training has taught us to ignore the computer! Although our job is ostensibly to create programs that let users do stuff with their computers, we place a greater emphasis on the development process and dev-oriented concerns than on the final user product. SICP contains a quote that I find to be a good summarization of the problem: “programs must be written for people to read, and only incidentally for machines to execute.” Many programmers find that quote wise and inspiring, but users are not interested in reading programs, they’re interested in executing them, fast. We can’t make programs that run fast if we write them in a way that is only “incidentally” executable. The computer is not an implementation detail that can be abstracted away and ignored — it’s an integral part of the solution. A program that makes no room for the target machine in its design will inevitably run slower than one that does. A common argument against taking the computer into account during the design phase of a program is “premature optimization is the root of all evil.” Topics such as cache-friendlin
(read more)
tl;dr: When multiple apps interact with the same database, nasty side-effects can happen: one app keeps the database busy, and all other apps might stop responding. In this case, you are dealing with an incident that is difficult to debug due to a non-obvious root cause. Assigning a name to each database connection can make a difference. It can cut debugging time by multiple hours and help you find the root cause faster. From the perspective of the database, you can differentiate the apps and their commands to identify the bad client. ➡️ Want to see how it works? Check out examples for MongoDB, MySQL, PostgreSQL, redis, and non-database systems like RabbitMQ or HTTP. Why does naming your database connection make sense? Many of the applications on this planet interact with some kind of database. In a perfect world: every application has its own database; several applications do not share the same database; direct access to the stored data is shielded by an application via an API. The thing is: we don’t live in a perfect world. The reality often is: several applications share one or multiple databases; these applications are developed independently and receive different types of traffic patterns. This may lead to a situation where one application requests many compute resources from the database via inefficient queries. At the same time, other applications might suffer from unexpected behavior or a partial outage due to the limited resources available on the database to serve the requests. (Figure: a perfect world where every application has its own database vs. the reality where databases are shared.) This situation is typically hard to debug because the root cause is not that obvious: Application B is failing because the database cannot answer in time due to Application A sending expensive queries. In my last eight years working mainly on the scalability and reliability of trivago, I have seen many outages that involved similar things: blocked and unresponsive Redis instances, blocked database tables due to inefficient queries, overloaded database servers due to rising traffic and query volumes, and services that receive a large number of HTTP requests from unknown sources. In all cases, identifying the client r
(read more)
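For PostgreSQL specifically, one way to do this from Python is to pass the standard libpq application_name parameter when connecting (the connection details below are hypothetical); the name then shows up in pg_stat_activity, so the database side can tell which app is sending the expensive queries. The article itself links examples for several other systems.

import psycopg2

# application_name is a standard libpq / PostgreSQL connection parameter.
conn = psycopg2.connect(
    host="db.internal",        # hypothetical connection details
    dbname="bookings",
    user="app",
    password="secret",
    application_name="checkout-service",
)

with conn, conn.cursor() as cur:
    # Every connected client is now identifiable by name on the server side.
    cur.execute("SELECT application_name, state, query FROM pg_stat_activity")
    for name, state, query in cur.fetchall():
        print(name, state, (query or "")[:60])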
A recurring question that surfaces around the Future of Coding Community is what happened to OpenDoc? Why did it fail? This post is a summary of reasons found around the web; then I will explore other implementations similar to OpenDoc to see if there is a general pattern. Bias warning: I pick the quotes and the emphasis, read the sources in full to form your own conclusion and let me know! OpenDoc To start, here's a brief description of what OpenDoc was: The OpenDoc concept was that developers could just write the one piece they were best at, then let end-users mix and match all of the little pieces of functionality together as they wished. Let's find out the reasons: OpenDoc post by Greg Maletic A consortium, lots of money and the main driver being competing against Microsoft:
(read more)
Quoting Wikipedia on the classic social science text, "Exit, Voice, and Loyalty": The basic concept is as follows: members of an organization, whether a business, a nation or any other form of human grouping, have essentially two possible responses when they perceive that the organization is demonstrating a decrease in quality or benefit to the member: they can exit (withdraw from the relationship); or, they can voice (attempt to repair or improve the relationship through communication of the complaint, grievance or proposal for change). For example, the citizens of a country may respond to increasing political repression in two ways: emigrate or protest. Similarly, employees can choose to quit their unpleasant job, or express their concerns in an effort to improve
(read more)
Over the last few years, I've worked on open-source distributed systems in Go at Google. As a result, I've thought a lot about dependency management, systems configuration, programming languages, and compilers. Again and again, I saw the same fundamental data structure underpinning these technologies: the directed acyclic graph. The most frustrating part was modeling graph-based configuration in languages that optimized for hierarchical data structures. That's why I created Virgo. Virgo is a graph-based configuration language. It has two main features: edge definitions and vertex definitions. The vgo configuration file then parses into an adjacency list. You can achieve similar results by adding additional conventions and restrictions on YAML or JSON. Much like YAML optimizes for human readability, Virgo optimizes for natural graph readability, editability, and representation. // config.vgo a -> b, c, d -> e <- f, g (A graphical representation of the Virgo graph.) Virgo is open to proposals and language changes. Please open up an issue to start a discussion at https://github.com/r2d4/virgo. Graphs are everywhere in configuration management. One graph that engineers may be familiar with is the
(read more)
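To illustrate what "parses into an adjacency list" means, here is a toy Python sketch that handles only the simple a -> b, c form; the real Virgo parser is written in Go and also supports reversed <- edges and other syntax.

from collections import defaultdict

def parse_edges(text):
    """Parse lines like 'a -> b, c' into an adjacency list {source: [targets]}."""
    graph = defaultdict(list)
    for line in text.splitlines():
        line = line.split("//")[0].strip()  # drop comments and blank lines
        if "->" not in line:
            continue
        source, targets = line.split("->", 1)
        for target in targets.split(","):
            graph[source.strip()].append(target.strip())
    return dict(graph)

config = """
// simplified from the config.vgo snippet above
a -> b, c
d -> e
"""
print(parse_edges(config))  # {'a': ['b', 'c'], 'd': ['e']}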
Machine Learning Operations (MLOps) has come to be an important push for enterprises in 2021 and beyond – and there are clear reasons why this paradigm shift in Enterprise AI is upon us. Most enterprises who have begun data science and machine learning programs over the last several years have had difficulties putting even their promising machine learning models and proof of concept exercises into action, by deploying them meaningfully in production environments. I use the term “meaningfully” here, because the nuances around deployment make all the difference and form the soul of the subject matter around MLOps. In this post, I wish to discuss what ails enterprise AI today, sources of the gaps between production and proof-of-concept, expectations from MLOps implementations and the current state of the discourse on MLOps. Note and Acknowledgement: I have also discussed several ideas and patterns I've seen from experiences I've had in the industry, not necessarily in one company or job, but going back all the way to projects and programs I've been in over the last seven to ten years. I don't mention clients or employers here as a matter of principle, but I would like to acknowledge mentors and clients for their time and energy and occasionally their guidance as well, in the synthesis of some of these ideas. It is a more boundaryless world than before, and great conversations are to be had regardless of one's location. I find a lot of the content and conversations regarding data science on Twitter and LinkedIn quite illuminating - and together with work and clients, the twain have constituted a great environment in which to discuss and develop ideas. What
(read more)
In an effort to take advantage of an old Rodenstock newspaper enlargement lens that was only being used as a paperweight, photographer Tim Hamilton has constructed an enormous “ultra-large-format” projection camera that he has used to capture unique photos and videos. Hamilton says that the reason he built the device was to make use of the old enlargement lens that he had in his possession. “Before I got the lens, it was being used as a paperweight, and the old photojournalists who worked at the newspaper before the digital transition were saddened by that. So someone handed it to me,” he says. “They are fairly rare and expensive lenses and it’s been begging to be made into a camera.” Hamilton says that the lens he has is a Rodenstock 600mm f/9 APO-Ronar, which in good condition can fetch north of $900. He also says that, if he chooses, he can stop the lens down to f/255. He got the idea to turn it into a giant camera from a story he read on PetaPixel. “I’ll admit it’s not my idea and I saw somebody else do this on Petapixel with an 8×10,” he says. Hamilton references the work of Ukrainian photographer Olexiy Shportun, who created a digital camera system that makes photos that resemble large format film but uses a modern mirrorless camera. Inspired, Hamilton decided to apply his Rodenstock to the same concept. Only his camera box would have to be much, much larger. “Because the projection distance was so small, the focal plane from the
(read more)
Background Note: I started writing this article about one year ago (September 2020), but I dropped it at some point. Its final version is way less ambitious than my original plans for it, mostly because I forgot some of the things that were on my mind back then. Still, better than nothing. A long time ago (in 2011) I wrote about my frustrations with Linux that led me to abandon the OS after having spent quite a lot of time on it. After this article I made one failed attempt to convert to Windows and eventually I settled on macOS for almost a decade. While I was reasonably happy with macO
(read more)
Resources Download Source CodeSummary # Terminal rails new template --skip-javascript bin/rails g scaffold products name color "price:decimal{8,2}" sku bundle add faker bundle add hotwire-rails bin/rails hotwire:install# db/seeds.rb 100.times do Product.create( name: Faker::Lorem.word, color: Faker::Color.hex_color, price: Faker::Commerce.price, sku: Faker::Number.number(10) ) end# views/products/index.html.erb <% @products.each do |product| %> <%= content_tag :tr, id: dom_id(product) do %> <%= product.name %> <%= product.color %> <%= product.price %>
(read more)
2 What are you doing this week? ☶ ask programming authored by caius 1 hour ago | 2 comments What are you doing this week? Feel free to share! Keep in mind it’s OK to do nothing at all, too. caius 1 hour ago | link Holiday. 😄 Bl
(read more)
This post is a summary of content from papers covering the topic; it's mostly quotes from the papers from 1983, 1993 and 1997 with some editing. References to the present and future depend on the paper but should be easy to deduce. See the Sources section at the end. Introduction In 1981, the emergence of the government-industry project in Japan known as Fifth Generation Computer Systems (FGCS) was unexpected and dramatic. The Ministry of International Trade and Industry (MITI) and some of its scientists at the Electrotechnical Laboratory (ETL) planned a project of remarkable scope, projecting both technical daring and major impact upon the economy and society. This project captured the imagination of the Japanese people (e.g. a book in Japanese by Junichiro Uemae recounting its birth was titl
(read more)
In the previous post I talked about how to generate input strings from any given context-free grammar. While that algorithm is quite useful for fuzzing, one of the problems with it is that the strings produced from that grammar are skewed toward shallow strings. For example, consider this grammar: To generate inputs, let us load the limit fuzzer from the previous post. The Fuzzer The generated strings (which represent random integers) are as follows. As you can see, there are more single digits in the output than longer integers. Almost half of the generated strings are single character
(read more)
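A minimal Python sketch of why naive random expansion skews shallow (this is not the post's fuzzer, just an illustration): with a grammar where <digits> is either a single <digit> or a <digit> followed by more <digits>, picking rules uniformly at random terminates after one digit about half the time.

import random
from collections import Counter

GRAMMAR = {
    "<digits>": [["<digit>"], ["<digit>", "<digits>"]],
    "<digit>": [[d] for d in "0123456789"],
}

def generate(symbol="<digits>"):
    if symbol not in GRAMMAR:
        return symbol  # terminal symbol
    rule = random.choice(GRAMMAR[symbol])  # uniform choice over expansion rules
    return "".join(generate(token) for token in rule)

samples = [generate() for _ in range(10_000)]
print(Counter(len(s) for s in samples).most_common(5))
# length 1 dominates: roughly half of all generated strings are single digits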
I got bit by the TypeScript bug hard a few years ago. Having the compiler to lean on has made me much more productive and confident in my work.But I know a lot of people don’t feel this way. We don’t always get autonomy in what tools we use, so sometimes people are forced onto TypeScript against their preferences. And people can often get stuck feeling like they have to fight against the compiler, or at least continually contend with its nagging.Most people understand that TypeScript adjusts some tradeoffs in the effort we apply to a project. In normal JS, you can get something up and running quickly, but may have to spend much more time addressing bugs or edge cases. Where with TS, you put in more work up front, and have to do less fiddling with small things, as the compiler will catc
(read more)
This is a fascinating question. The other answers here are all speculative, and in some cases flat-out incorrect. Instead of writing my opinion here, I actually did some research and found original sources that discuss why delete and put are not part of the HTML5 form standard. As it turns out, these methods were included in several, early HTML5 drafts (!), but were later removed in the subsequent drafts. Mozilla had actually implemented this in a Firefox beta, too. What was the rationale for removing these methods from the draft? The W3C discussed this topic in bug report 10671. Mike Amundsen argued in favor of this support: Executing PUT and DELETE to modify resources on the origin server is straight-forward for modern Web browsers using the XmlHttpRequest object. For unscripted browse
(read more)
Although the main interface between applications and a Vitess database is through the MySQL protocol, Vitess is a large and complex distributed system, and all the communication between the different services in a Vitess cluster is performed through gRPC. Because of this, all service boundaries and messages between Vitess' systems are specified using Protocol Buffers. The history of Vitess' integration with Protocol Buffers is rather involved: We have been using and keeping up to date with the Go Protocol Buffers package since its earliest releases, up until May last year, when Google released a new Go API for Protocol Buffers, which is not backwards compatible with the previous Go package. There are several reasons why we didn’t jump at the chance of upgrading to the new API right away: t
(read more)
How to write really slow Rust code How I tried to port Lisp code to Rust and managed to get a much slower program... and how to fix that! Written on 31 Jul 2021, 10:50 AM Photo by Sam Moqadam on Unsplash I have recently published a blog post that, as I had expected (actually, hoped for, as that would attract people to contribute to the “study”), generated quite some polemic on the Internet! The post was about an old study by Lutz Prechelt comparing Java to C/C++, as well as a few follow-up papers that added other languages to the comparison, including Common Lisp and a few scripting languages. I decided to try and see if the results in those papers, which ran their studies 21 years ago, still stand or if things changed completely since then. I couldn’t get a bunch of student
(read more)
Dekel Entrepreneur. R&D consultant. Geek. Aug 1 ・5 min read Intro & Background If you have some experience with React, you probably came across styled-components. In the last few years, the concept of css-in-js became more popular, and there are multiple libraries that are available for us to use. styled-components is one of them, but you can also find Emotion, Radium, JSS, and more. In this post I'm not going to cover the pros a
(read more)
Executive Summary On July 9th, 2021 a wiper attack paralyzed the Iranian train system. The attackers taunted the Iranian government as hacked displays instructed passengers to direct their complaints to the phone number of the Iranian Supreme Leader Khamenei’s office. SentinelLabs researchers were able to reconstruct the majority of the attack chain, which includes an interesting never-before-seen wiper. OPSEC mistakes let us know that the attackers refer to this wiper as ‘Meteor’, prompting us to name the campaign MeteorExpress. At this time, we have not been able to tie this activity to a previously identified threat group nor to additional attacks. However, the artifacts suggest that this wiper was developed in the p
(read more)
Posted 09 Feb 2020 Entity-Component-System (ECS) is a type of game architecture that focuses on composing entities with data-only components, and processing logic separately in systems. Though, while working on my own little game engine, I noticed that a lot of the methods presented for implementing ECS frameworks are not trivial. Often, using this type of architecture, people become obsessed with speed and efficiency, and don’t get me wrong, this is a goal. But it shouldn’t be your primary goal, especially when making small games. In trying to get the best performance you often end up making something overcomplicated, which just isn’t going to make your life easier. This frustrated me; I like simple solutions, partly because they’
(read more)
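As a sketch of the idea in the excerpt (entities are just ids, components are plain data, and logic lives in systems that iterate over the data), here is a deliberately simple Python version; none of this is taken from the author's engine.

# Components: plain data keyed by entity id.
positions = {}   # entity_id -> (x, y)
velocities = {}  # entity_id -> (dx, dy)

_next_id = 0
def create_entity():
    global _next_id
    _next_id += 1
    return _next_id

def movement_system(dt):
    """A system: process every entity that has both a position and a velocity."""
    for entity, (dx, dy) in velocities.items():
        if entity in positions:
            x, y = positions[entity]
            positions[entity] = (x + dx * dt, y + dy * dt)

player = create_entity()
positions[player] = (0.0, 0.0)
velocities[player] = (1.0, 2.0)
movement_system(dt=0.5)
print(positions[player])  # (0.5, 1.0)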
Have you ever noticed that Jira, and most if not all SWE work tracking systems, allow assigning only one person to a given task? The whole industry (at least where I've seen it) runs on the assumption that at the bottom “one task == one person”. The more I think about it through the years, the more confident I am that it's a very unproductive thing to do, and we should default to two people working at the same time on a given task. In complex domains, sometimes potentially even three. You have probably experienced at some point how quickly things can get done, especially in case of emergencies (like “OMG, something is wrong in production”) where multiple people with different skills and knowledge jump together into one “room”. I've recently read The Goal, which I highly
(read more)
Should standalone web components be written in vanilla JavaScript? Or is it okay if they use (or even bundle) their own framework? With Vue 3 announcing built-in support for building web components, and with frameworks like Svelte and Lit having offered this functionality for some time, it seems like a good time to revisit the question. First off, I should state my own bias. When I released emoji-picker-element, I made the decision to bundle its framework (Svelte) directly into the component. Clearly I don’t think this is a bad idea (despite my reputation as a perf guy!), so I’d like to explain why it doesn’t shock me for a web component to rely on a framework. Size concerns Many web developers might bristle at the idea of a standalone web component relying on its own framework.
(read more)
06/07/2021 5 minutes to read In this article .NET Multi-platform App UI (MAUI) is a cross-platform framework for creating native mobile and desktop apps with C# and XAML. Using .NET MAUI, you can develop apps that can run on Android, iOS, macOS, and Windows from a single shared code-base. .NET MAUI is open-source and is the evolution of Xamarin.Forms, extended from mobile to desktop scenarios, with UI controls rebuilt from the ground up for performance and extensibility. If you've previously used Xamarin.Forms to build cross-platform user interfaces, you'll notice many similarities with .NET MAUI. However, there are also some differences. Using .NET MAUI, you can create multi-platform apps using a single project, but you can add platform-specific source code and resources if necessary. One of the key aims of .NET MAUI is to enable you to implement as much of your app logic and UI layout as possible in a single code-base. Who .NET MAUI is for .NET MAUI is for developers who want to: Write cross-platform apps in XAML and C#, from a single shared code-base in Visual Studio. Share UI layout and design across platforms. Share code, test, and business logic across platforms. How .NET MAUI works .NET MAUI unifies Android, iOS, macOS, and Windows APIs into a single API that allows a write-once run-anywhere developer experience, while additionally providing dee
(read more)
Why it matters: The previous record holder for the highest fine received for GDPR violations was Google who received a €50 million penalty. However, Amazon was recently fined an eye-watering €746 million, signaling that violating privacy rules in the EU is getting a lot more expensive as time goes by. Amazon seems to be doing relatively well under its new leadership, but the company's growth is slowing down and the shortcuts taken to achieve its gargantuan size are biting again. The retail giant has been fined a whopping €746 million ($885 million) after Luxembourg's National Data Protection Commission (CNPD) found the company had violated GDPR rules when processing personal data. The Wall Street Journal spotted the fine in a security filing, where the company disclosed that it was issued two weeks ago after the CNPD concluded an investigation into Amazon's advertising practices. Amazon noted in the filing the CNPD asked it to revise its advertising practices, but the company didn't reveal any details about the proposed changes. Either way, Amazon isn't happy about the fine, and believes "the decision relating to how we show customers relevant advertising relies on subjective and untested interpretations of European privacy law." The company plans to appeal the decision in court, and argues the proposed fine is "entirely out of proportion." GDPR rules allow for the penalty to be €20 million or 4 percent of a company's annual global revenue, whichever is higher. Back in June, the Wall Street Journal saw a CNPD draft where the fine was set at $425 million, but that amount more than doubled after other EU privacy regulat
(read more)
The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community. The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company. (This article is a repost from my personal blog at https://marccgk.github.io) Much has been written lately about C++, the direction the language is taking and how most of what gets called “modern C++” is just a no-go zone for game developers. Although I fully agree with the sentiment, I tend to look at C++ evolution as the effect of a pervasive set of ideas that dominate the minds of most developers. In this post I’ll try to put some of my thoughts on these ideas in order and, hopefully, something coherent will come up. Even though C++ is described as a multi-paradigm programming language, the truth is that most programmers will use C++ exclusively as an Object Oriented language (generic programming will be used to “augment” OOP). Even beyond C++, newer languages have been invented implementing Object Oriented Programming as a first class citizen with more features than the ones present in C++ (e.g. C#, Java). OOP is supposed to be a tool, one of multiple paradigms that programmers can use to solve problems by writing code. However, in my experience, OOP is taken as the gold standard for software development by the majority of professionals. As such, developing solutions starts by deciding which objects are needed. The actual problem solving starts after there’s an object or objects that will host the code. When that’s the thought process, OOP is not a tool, but the entire toolset. The way I visualize an OOP solution is like a constellation of stars: a group of objects with arbitrary links between them. This is not different from looking at it as a graph, where objects are the nodes and the relationships, the edges, but I like the notion of group/cluster that constellation conveys (vs the abstract meaning of graph). What worries me, though, is how this “constellation of objects” happens to be. As I see it, these constellations
(read more)
"When I see a door with a push sign, I pull first to avoid conflicts" - anonymous For those that work with git for some time, it is not often that you get to discover new things about it. That is if you exclude the plumbing commands which probably most of us don't know by heart and most likely that's for the better. To my surprise, I recently found out about 2 new additions to the list of high-level commands: git restore git switch To understand why they came to be, let's first visit our old friend git checkout. git checkout is one of the many reasons why newcomers find git confusing. And that is because its effect is context-dependent. The way most people use it is to switch the active branch in their local repo. More exactly, to switch the branch to which HEAD points. For example, you can switch to the develop branch if you are on the main branch: git checkout develop You can also make your HEAD pointer reference a specific commit instead of a branch(reaching the so-called detached HEAD state): git checkout f8c540805b7e16753c65619ca3d7514178353f39 Where things get tricky is that if you provide a file as an argument instead of a branch or commit, it w
(read more)
Today we are happy to announce axum: An easy to use, yet powerful, web framework designed to take full advantage of the Tokio ecosystem. Route requests to handlers with a macro-free API. Declaratively parse requests using extractors. Simple and predictable error handling model. Generate responses with minimal boilerplate. Take full advantage of the tower and tower-http ecosystem of middleware, services, and utilities. In particular the last point is what sets axum apart from existing frameworks. axum doesn't have its own middleware system but instead uses tower::Service. This means axum gets timeouts, tracing, compression, authorization, and more, for free. It also enables you to share middleware with applications written using hyper or tonic. The "hello world" of axum looks like this: use axum::prelude::*; use std::net::SocketAddr; #[tokio::main] async fn main() { let app = route("/", get(root)); let addr = SocketAddr::from(([127, 0, 0, 1], 3000)); hyper::Server::bind(&addr) .serve(app.into_make_service()) .await .unwrap(); } async fn root() -> &'static str { "Hello, World!" } This will respond to GET / with a 200 OK response where the body is Hello, World!. Any other requests will result in a 404 Not Found response. Requests can be parsed declaratively using "extractors". An extractor is a type that implements FromRequest. Extractors can be used as arguments to handlers and will run if the request URI matches. For example, Json is an extractor that consumes the request body and parses it as JSON: use axum::{prelude::*, extract::Json}; use serde::Deserialize; #[derive(Deserialize)] struct CreateUser { username: String, } async fn create_user(Json(payload): Json<CreateUser>) { } let app = route("/users", post(create_user)); axum ships with many useful extractors such as: Bytes, String, Body, and BodyStream for consuming the request body. Method, HeaderMap, and Uri for getting specific parts of the request. Form, Query, UrlParams, and UrlParamsMap for more high level request parsing. Extension for sharing state across handlers. Request if you want full control. Result<T, E> and Option<T> to make an extractor optional. You can al
(read more)