While working on my master’s thesis, I investigated some recently proposed constructions that turn AEADs into key-committing or fully committing ones. Key commitment has recently received a lot more attention, so I expect this post to become outdated quite soon as new research emerges. This post serves as a quick collection of personal notes and pointers that may help someone looking to add key commitment to their AEAD schemes today. There exist constructions proposed in earlier work, but the ones covered herein are the ones I focused on primarily. An implementation of UtC+HtE and CTX for ChaCha20-Poly1305 with BLAKE2b is available here: https://github.com/brycx/CAEAD

Key-committing or fully committing?

The first question you need to answer is whether you want a key-committing AE or a fully committing one. If an AEAD is key-committing, it commits to the input (K, N, C): the key, nonce, and ciphertext. A fully committing AE commits to the entire input, meaning the AD as well: (K, N, AD, C). If you deal with protocols where you need message franking, for example, you would require a fully committing AEAD. Both constructions mentioned in this post are generic, meaning they can add commitment on top of any AEAD scheme and not just, say, AES-GCM.

UtC, RtC and HtE by Bellare and Hoang

These transformations are described in the 2022 paper “Efficient Schemes for Committing Authenticated Encryption” by Mihir Bellare and Viet Tung Hoang. They define two generic constructions that turn either a nonce-based AEAD (nAEAD) or a misuse-resistant AEAD (MRAE) into a key-committing scheme.

UtC (UNAE-then-Commit)

UtC adds key commitment to any nonce-based AEAD scheme. It uses what Bellare and Hoang call a committing PRF (F) to derive a commitment block P and a subkey L from the key and nonce. The subkey L is used as the key for the underlying AEAD, and P is prepended to the ciphertext.
(P, L) ← F(K, N)
C ← AEAD(L, N, A, M)
C ← P || C

Bellare and Hoang propose a specific instantiation of a committing PRF based on AES in their paper.

RtC (MRAE-
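The UtC steps above can be sketched in Python. This is a hedged illustration, not the paper's construction: HMAC-SHA-512 stands in for the AES-based committing PRF, the 32-byte split and the "UtC-commit" label are my own choices, and the underlying AEAD is passed in as a plain function.

```python
import hmac
import hashlib

def committing_prf(key: bytes, nonce: bytes) -> tuple[bytes, bytes]:
    # Hedged stand-in for the committing PRF F: one HMAC-SHA-512 call,
    # split into a 32-byte commitment block P and a 32-byte AEAD subkey L.
    out = hmac.new(key, b"UtC-commit" + nonce, hashlib.sha512).digest()
    return out[:32], out[32:]

def utc_encrypt(aead_encrypt, key: bytes, nonce: bytes, ad: bytes, msg: bytes) -> bytes:
    # UtC: derive (P, L) from (K, N), encrypt under the subkey L,
    # and prepend the commitment block P to the ciphertext.
    p, subkey = committing_prf(key, nonce)
    return p + aead_encrypt(subkey, nonce, ad, msg)

def utc_decrypt(aead_decrypt, key: bytes, nonce: bytes, ad: bytes, ct: bytes) -> bytes:
    # Recompute (P, L) and verify the commitment block before decrypting.
    p, subkey = committing_prf(key, nonce)
    if not hmac.compare_digest(p, ct[:32]):
        raise ValueError("commitment check failed")
    return aead_decrypt(subkey, nonce, ad, ct[32:])
```

Checking P before decryption is what provides the key commitment here: producing one ciphertext that verifies under two different keys would require a collision on F.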
(read more)
Almost exactly one year ago I wrote the first commit for rules_xcodeproj. Like a lot of software engineers, I’m pretty bad at estimating, and thought that I would be able to finish 1.0 in 2 to 4 months 😅. The longer development cycle was a result of an increased scope and the level of quality that I came to expect for a proper 1.0 release. Over the course of the year, I believe the project has risen to meet my expectations, and today I’m happy to announce the release of version 1.0 of rules_xcodeproj!

The road to 1.0

The road to 1.0 has been an incredible journey. Early in the development cycle Spotify, Robinhood, and Slack engineers became adopters and contributors; without their help I wouldn’t be writing this blog post today 🙏. JP became a vocal champion of rules_xcodeproj after integrating it with the SwiftLint and Envoy Mobile projects. During BazelCon 2022 the project got a couple of shout-outs, including during Erik’s wonderful talk. And I’m also incredibly grateful th
(read more)
Nothing compares to the sense of power a UNIX sysadmin experiences when being able to print from a command line on their UNIX system :p I kinda omitted this topic (printing) for quite a long time – when I was using FreeBSD in a corporate environment I still printed from a Windows VM on network printers. Then they forced me to use Windows anyway. At home my wife always had a printer configured (as she uses it more) and the other printer also had a USB port – so you could just copy the PDF or JPG file to a USB pendrive, attach it to the printer, and hit the print button for the selected files. No configuration needed. I was also disappointed when I tried several years ago to configure a USB printer on FreeBSD … and failed. Recently I thought that it’s about fucking time to dig into that topic and have at least one working printer on FreeBSD.

This guide will focus on using two printers with CUPS on FreeBSD:
HP Color LaserJet 200 M251nw Printer (attached over TCP/IP network)
Samsung Black/White ML-1915 Printer (local USB attached)

There will be two different prompt types used for the commands:
starting with % for commands that can be executed as a regular user or root
starting with # for commands that must be executed as the root user

The Table of Contents for this article is shown below:
CUPS Packages and Service Configuration
Network Printer – HP M251nw
Try to Print Some Document
USB Printer – Samsung ML-1915
Choose Default Printer
CUPS Printers Config
Command Line Printing
Last Chance Fancy Pants
Summary

There are only three pkg(8) packages needed for my printers – these are:

# pkg install cups cups-filters splix

We will also need to add some lines to the /etc/devfs.rules file. These lines are important for printing with CUPS:

add path 'lpt*' mode 0660 group cups
add path 'ulpt*' mode 0660 group cups
add path 'unlpt*' mode 0660 group cups

The rest of the config is just the rest of my desktop config and can be omitted for printing.
The entire /etc/devfs.rules file looks as follows.

% cat /etc/devfs.rules
[desktop=10]
add path 'lpt*' mode 0660 group cups
add path 'ulpt*' mode 0660 group cups
add path 'unlpt*' mode 0660 group cups
add path 'acd*' mode 0660 group operator
add path 'cd*' mode 0660 group operator
add path 'da*' mode 0660 group operator
add path 'pass*' mode 0660 group operator
add path 'xpt*' mode 0660 group operator
add path 'fd*' mode 0660 group operator
add path 'md*' mode 0660 group operator
add path 'uscanner*' mode 0660 group operator
add path 'ugen*' mode 0660 group operator
add path 'usb/*' mode 0660 group operator
add path 'video*' mode 0660 group operator
add path 'cuse*' mode 0660 group operator

We will also need to add devfs_system_ruleset=desktop to the /etc/rc.conf file.

% grep desktop /etc/rc.conf
devfs_system_ruleset=desktop

Now we need to restart the devfs daemon to read the new config.

# service devfs restart

We can also make sure that devfs(8) knows our ruleset config.

# devfs rule -s 10 show | column -t
100   path  acd*       group  operator  mode  660
200   path  cd*        group  operator  mode  660
300   path  da*        group  operator  mode  660
400   path  pass*      group  operator  mode  660
500   path  xpt*       group  operator  mode  660
600   path  fd*        group  operator  mode  660
700   path  md*        group  operator  mode  660
800   path  uscanner*  group  operator  mode  660
900   path  lpt*       group  cups      mode  660
1000  path  ulpt*      group  cups      mode  660
1100  path  unlpt*     group  cups      mode  660
1200  path  ugen*      group  operator  mode  660
1300  path  usb/*      group  operator  mode  660
1400  path  video*     group  operator  mode  660
1500  path  cuse*      group  operator  mode  660

The column(1) is not needed here – I used it only to format the output. What amazes me to this day is that the column(1) command is still not available on such an enterprise (and also overpriced) IBM AIX system 🙂

Here are the contents of a fresh CUPS installation at the /usr/local/etc/cups dir.
# tree -F --dirsfirst /usr/local/etc/cups
/usr/local/etc/cups
├── ppd/
├── ssl/
├── cups-files.conf
├── cups-files.conf.sample
├── cupsd.conf
├── cupsd.conf.sample
├── snmp.conf
└── snmp.conf.sample

3 directories, 6 files

You will need to add cupsd_enable=YES to the /etc/rc.conf file.

% grep cups /etc/rc.conf
cupsd_enable=YES

Make sure that the cupsd service is started and running.

# service cupsd start
Starting cupsd.

# service cupsd status
cupsd is running as pid 44515.

# sockstat -l4 | grep -e ADDRESS -e 631
USER  COMMAND  PID    FD  PROTO  LOCAL ADDRESS  FOREIGN ADDRESS
root  cupsd    44515  6   tcp4   127.0.0.1:631  *:*

Just in case – here are the groups my vermaden user is in:

% id | tr ',' '\n'
uid=1000(vermaden) gid=1000(vermaden) groups=1000(vermaden)
0(wheel)
5(operator)
44(video)
69(network)
145(webcamd)
920(vboxusers)

It was not needed to add my vermaden user to the cups group to print – but feel free to also test that if you face any problems.

First I will go with the TCP/IP attached network printer – the HP M251nw. Before doing any steps or configuration on the FreeBSD part we first need to connect that printer to the TCP/IP network. As the HP M251nw printer has WiFi – I decided to connect it to my wireless WiFi router instead of using an RJ45 cable. I will not document that part as HP already provides a decent guide on how to achieve that – https://youtu.be/jLDzQBAtKyQ – on the YouTube service. In my case I used the 10.0.0.9 IP address and I configured my WiFi router to always attach that MAC address to that IP address.

The next step is to open the http://localhost:631/ page in your browser. You will see the default CUPS web interface. Hit the Administration tab on the top. Then click the Add Printer button in the middle of the page – you will be asked for a username and password – use your username and your password here. The network attached HP M251nw printer has already been detected by CUPS. Select it and click the Continue button.
CUPS will suggest some long names and a description as shown below … but we will use a simpler and shorter name instead. Next we need to choose which driver to use. We will not find a HP M251nw driver on the CUPS list but there are two drivers that will work here:

HP LaserJet Series PCL 6 CUPS (en)
HP Color LaserJet Series PCL 6 CUPS (en)

As the HP M251nw is a color printer we will choose HP Color LaserJet Series PCL 6 CUPS here. After a moment we will see a message that the HP M251nw printer has been successfully added to CUPS. You can notice that a new PPD file appeared in the CUPS dir, named exactly like the printer name.

% ls -l /usr/local/etc/cups/ppd
total 9K
-rw-r----- 1 root cups 9721 2023-02-06 11:24 HP-M251nw.ppd
-rw-r----- 1 root cups 9736 2023-02-06 11:23 HP-M251nw.ppd.O

This is how our HP M251nw printer status page looks. We should now set up the default printing options. From the Administration drop down menu select the Set Default Options option. The only things I selected/set that are different from the CUPS defaults are the A4 paper size and 1200 DPI resolution.

I will now use the Atril PDF viewer to test how printing on the HP M251nw works – I used a small one page PDF file with one of my old guides – the ZFS Madness one from 2014. From the File menu select the Print… option – or just hit the [CTRL]+[P] shortcut. Then select the HP-M251nw printer from the list and hit the Print button below. After some noises and time (not much later) the printer dropped a printed page. Seems to work properly. Looks good.

Let’s now add the USB printer. To get the needed PPD driver for the Samsung ML-1915 printer we installed the print/splix package. Here is the exact driver we will use.

% pkg info -l splix | grep 1915
/usr/local/share/cups/model/samsung/ml1915.ppd

Before attaching the Samsung ML-1915 printer to your computer you may check what devices devd(8) will create. First power on the Samsung ML-1915 printer. Then attach the USB cable from the printer to your FreeBSD box (assuming that printer has AC p
(read more)
Sync’ up! … without getting drained feb 6 It’s imperative If you’re writing code in an imperative language like C or Python, there’s one over-arching heuristic that I think all such hackers should try to follow: don’t write whopper routines. What are whopper routines? Well, if you don’t know, maybe you are subjecting the world to such source-code. Whopper routines are functions, routines, methods, that run on and on, and try to do everything all right there. These routines kind of even look like a whopper (burger), as all the conditional branching is falling out everywhere, like onions and pickles hanging outside of the sandwich. A whopper just does too much. It doesn’t share any of the logic with smaller routines that could more effectively specialize in some of the parts. But this isn’t all about aesthetics; there’s a prime reason why one should avoid whoppers. Unit tests When you create whopper routines, the amount of setup needed to test such code is daunting. Say, for instance, you have a ‘main’ Python routine that calls an external API, digests the data, converts it to something meaningful for your application, and writes to a file. To test this whopper, in a unit test, say, you’d almost need to create the world over in order to make it all work. However, if your ‘main’ routine called a dozen-or-so smaller and more specialized routines, then it would be infinitely easier to write unit tests for these little individual functions. You could have a unit test for the function that converts the API data (you just need to cobble together some binary/plaintext to pass into it); You could have a unit test for the little function that writes to disc, even. With these minion routines, unit tests are a snap to write. And for imperative languages, a tested codebase is a strategy that’s wise to have when code failure isn’t an option.
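As an illustration of splitting the whopper (all names here are hypothetical, not from the post), the API-digesting 'main' could be broken into minion routines like this:

```python
import json

def convert_api_data(raw: bytes) -> dict:
    # Small, specialized: turns raw API bytes into the app's shape.
    record = json.loads(raw)
    return {"id": record["id"], "name": record["name"].strip().title()}

def write_report(path: str, data: dict) -> None:
    # Small, specialized: only knows how to persist.
    with open(path, "w") as f:
        json.dump(data, f)

def main(fetch, path: str) -> None:
    # The former whopper is now a thin coordinator.
    write_report(path, convert_api_data(fetch()))
```

A unit test for convert_api_data now just cobbles together a few bytes to pass in: no live API, no filesystem, no recreating the world.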
(read more)
Announcing Rust Magazine (2022-12-10)
VecDeque::resize() optimization (2022-12-24)
MiniLSM: A Tutorial of Building Storage Engine in a Week using Rust (2022-12-30)
(read more)
We’ve just uploaded mypy 1.0 to the Python Package Index (PyPI). Mypy is a static type checker for Python. This release includes new features, performance improvements and bug fixes. You can install it as follows:

python3 -m pip install -U mypy

You can read the full documentation for this release on Read the Docs.

New Release Versioning Scheme

Now that mypy has reached 1.0, we’ll switch to a new versioning scheme. Mypy version numbers will be of the form x.y.z. Rules:

The major release number (x) is incremented if a feature release includes a significant backward incompatible change that affects a significant fraction of users.
The minor release number (y) is incremented on each feature release. Minor releases include updated stdlib stubs from typeshed.
The point release number (z) is incremented when there are fixes only.

Mypy doesn’t use SemVer, since most minor releases have at least minor backward incompatible changes in typeshed. Also, many type checking features find new legitimate issues in code. These are not considered backward incompatible changes, unless the number of new errors is very high.

Any significant backward incompatible change must be announced in the blog post for the previous feature release, before making the change. The previous release must also provide a flag to explicitly enable or disable the new behavior (whenever practical), so that users will be able to prepare for the changes and report issues. We should keep the feature flag for at least a few releases after we’ve switched the default.

See “Release Process” in the mypy wiki for more details and for the most up-to-date version of the versioning scheme.

Performance Improvements

Mypy 1.0 is up to 40% faster than mypy 0.991 when type checking the Dropbox internal codebase. We also set up a daily job to measure the performance of the most recent development version of mypy to make it easier to track changes in performance.
Many optimizations contributed to this improvement:

Improve performance for errors on class with many attributes (Shantanu, PR 14379)
Speed up make_simplified_union (Jukka Lehtosalo, PR 14370)
Micro-optimize get_proper_type(s) (Jukka Lehtosalo, PR 14369)
Micro-optimize flatten_nested_unions (Jukka Lehtosalo, PR 14368)
Some semantic analyzer micro-optimizations (Jukka Lehtosalo, PR 14367)
A few miscellaneous micro-optimizations (Jukka Lehtosalo, PR 14366)
Optimization: Avoid a few uses of contextmanagers in semantic analyzer (Jukka Lehtosalo, PR 14360)
Optimization: Enable always defined attributes in Type subclasses (Jukka Lehtosalo, PR 14356)
Optimization: Remove expensive context manager in type analyzer (Jukka Lehtosalo, PR 14357)
subtypes: fast path for Union/Union subtype check (Hugues, PR 14277)
Micro-optimization: avoid Bogus[int] types that cause needless boxing (Jukka Lehtosalo, PR 14354)
Avoid slow error message logic if errors not shown to user (Jukka Lehtosalo, PR 14336)
Speed up the implementation of hasattr() checks (Jukka Lehtosalo, PR 14333)
Avoid the use of a context manager in hot code path (Jukka Lehtosalo, PR 14331)
Change various type queries into faster bool type queries (Jukka
(read more)
We propose a new package providing structured logging with levels. Structured logging adds key-value pairs to a human-readable output message to enable fast, accurate processing of large amounts of log data. See the design doc for details.

This is a huge API surface without any real production testing (AIUI). Perhaps it might be better to land it under golang.org/x for some time? Eg, like context, xerrors changes.

I love most of what this does, but I don't support its addition as it stands. Specifically, I have issues with the option to use inline key-value pairs in the log calls. I believe the attributes system alone is fine. Logging does not need the breakage that key-value args like that allow. The complexity in the documentation around Log should be a warning sign. ... The attribute arguments are processed as follows: If an argument is an Attr, it is used as is. If an argument is a string and this is not the last argument, the following argument is treated as the value a
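The pairing rule being quoted can be modeled in a short sketch (in Python rather than Go, purely for illustration; an Attr is modeled as a tuple, and the handling of a trailing lone string as a "!BADKEY" attribute is my assumption about the proposal's behavior):

```python
def to_attrs(args):
    # Toy model of slog's argument processing:
    #   - an Attr (here: a tuple) is used as is
    #   - a string that is not the last argument consumes the next
    #     argument as its value
    #   - a leftover lone value is reported under a "!BADKEY" key
    #     (assumed behavior, see lead-in)
    attrs, i = [], 0
    while i < len(args):
        a = args[i]
        if isinstance(a, tuple):
            attrs.append(a)
            i += 1
        elif isinstance(a, str) and i + 1 < len(args):
            attrs.append((a, args[i + 1]))
            i += 2
        else:
            attrs.append(("!BADKEY", a))
            i += 1
    return attrs
```

The commenter's objection is precisely that this positional pairing is easy to get wrong at call sites, whereas explicit Attr values cannot be mis-paired.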
(read more)
Here is a copy of the MIT license. One of the well-known open source licenses. It is, effectively, the only license that I’ve used for software I wrote or contributed to in the last 10 years:

Copyright

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permissio
(read more)
In software we express our ideas through tools. In data, those tools think in rectangles. From spreadsheets to data warehouses, to do any analytical calculation, you must first go through a re
(read more)
My iPhone SE 2016 is a good one. It’s easy to handle with one hand. The screen is more than decent. For most things the camera is good enough. It has a headphone jack. Good battery life. The Touch ID works faster than Face ID. I dropped it on the pavement a few times. No problem, only the casing has a few scratches and dents. To you it may look like a piece of junk now, but to me it’s like that proverbial old pair of jeans. Now you may think that it’s slow as mud in a pond. But, contrary to my experience with previous models, the iPhone SE actually got faster with iOS updates! It’s 2023 and this is still a really good phone. None of the currently supported iPhones is as small a
(read more)
With Twitter being a mess at the moment, I decided to try out Mastodon as an alternative. Mastodon is a federated social media platform, built on top of a protocol called ActivityPub. It can be
(read more)
Protocol Buffers are a popular choice for serializing structured data due to their compact size, fast processing speed, language independence, and compatibility. There exist other alternatives, including Cap’n Proto, CBOR, and Avro. Usually, data structures are described in a proto definition file (.proto). The protoc compiler and a language-specific plugin convert it into code:

$ head flow-4.proto
syntax = "proto3";
package decoder;
option go_package = "akvorado/inlet/flow/decoder";

message FlowMessagev4 {
  uint64 TimeReceived = 2;
  uint32 SequenceNum = 3;
  uint64 SamplingRate = 4;
  uint32 FlowDirection = 5;

$ protoc -I=. --plugin=protoc-gen-go --go_out=module=akvorado:. flow-4.proto
$ head inlet/flow/decoder/flow-4.pb.go
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
//   protoc-gen-go v1.28.0
//   protoc        v3.21.12
// source: inlet/flow/data/schemas/flow-4.proto

package decoder

import (
  protoreflect "google.golang.org/protobuf/reflect/protoreflect"

Akvorado collects network flows using IPFIX or sFlow, decodes them with GoFlow2, encodes them to Protocol Buffers, and sends them to Kafka to be stored in a ClickHouse database. Collecting a new field, such as source and destination MAC addresses, requires modifications in multiple places, including the proto definition file and the ClickHouse migration code. Moreover, the cost is paid by all users. It would be nice to have an application-wide schema and let users enable or disable the fields they need. While the main goal is flexibility, we do not want to sacrifice performance. On this front, this is quite a success: when upgrading from 1.6.4 to 1.7.1, the decoding and encoding performance almost doubled!
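What the generated encoder ultimately emits is the Protocol Buffers wire format. As a hedged, stdlib-only sketch (illustrative, not Akvorado's code), a varint-typed field is encoded like this:

```python
def encode_varint(n: int) -> bytes:
    # Protobuf base-128 varint: 7 bits per byte, least-significant
    # group first, high bit set on all but the final byte.
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

def encode_field(field_number: int, value: int) -> bytes:
    # A varint-typed field: key = (field_number << 3) | wire_type,
    # where wire type 0 means varint.
    return encode_varint(field_number << 3) + encode_varint(value)
```

Because each field is tagged with its number on the wire, unknown or disabled fields can simply be skipped, which is what makes a runtime-configurable schema compatible with existing consumers.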
🤗

goos: linux
goarch: amd64
pkg: akvorado/inlet/flow
cpu: AMD Ryzen 5 5600X 6-Core Processor
                          │ initial.txt │        final.txt             │
                          │   sec/op    │   sec/op     vs base         │
Netflow/with_encoding-12    12.963µ ± 2%   7.836µ ± 1%  -39.55% (p=0.000 n=10)
Sflow/with_encoding-12      19.37µ ± 1%   10.15µ ± 2%   -47.63% (p=0.000 n=10)

I use the following co
(read more)
Release Highlights

DBENGINE v2: The new open-source database engine for Netdata Agents, offering huge performance, scalability and stability improvements, with a fraction of the memory footprint!
FUNCTION: Processes: Netdata beyond metrics! We added the ability for runtime functions, that can be implemented by any data collection plugin, to offer unlimited visibility to anything, even not-metrics, that can be valuable while troubleshooting.
Events Feed: Centralized view of Space and Infrastructure level events about topology changes and alerts.
Integrations: New and improved plugins for data collection, alert notifications, and data exporters.

Collectors
Notifications
Exporters
Visualizations / Charts and Dashboards
Database
Streaming and Replication
API
Machine Learning
Installation and Packaging
Documentation and Demos
Administration
Other Notable Changes
Deprecation notice
Netdata Agent release meetup
Support options
Acknowledgements

❗ We are keeping our codebase healthy by removing features that are end-of-life. Read the deprecation notice to check if you are affected.

Netdata open-source growth

Almost 62,000 GitHub Stars
Over four million monitored servers
Almost 88 million sessions served
Over 600 thousand total nodes in Netda
(read more)
Eaton • Feb 6, 2023

Key Points / Summary:

I hacked Toyota’s Global Supplier Preparation Information Management System (“GSPIMS”), a web app used by Toyota employees and their suppliers to coordinate projects, parts, surveys, purchases, and other tasks related to the global Toyota supply chain.
System admin access was achieved through a backdoor accidentally introduced as part of a user impersonation/”Act As” feature. Any user could be logged in to just by knowing their email, completely bypassing the various corporate login flows.
Read/write access to the global user directory containing 14k+ users was achieved.
Data access achieved: 14k+ corporate user account details, confidential documents, projects, supplier rankings/comments, and more. Data access was global and not limited to North America.
The issue was responsibly disclosed to Toyota in November 2022 and fixed in a timely manner.

Over the course of a slow week in late October 2022, I decided to explore the subdomains of various major companies to see if I could find any exploits worth reporting/writing about. I found several interesting Toyota websites. In 7 days, I reported 4 different security issues to Toyota, all of which were classified as “critical”. One of the reports had a remarkably severe impact and is one of the most severe vulnerabilities I have ever found (so far!)

I discovered what was essentially a backdoor login mechanism in the Toyota GSPIMS website/application that allowed me to log in as any corporate Toyota user or supplier just by knowing their email. I eventually uncovered a system administrator email and was able to log in to their account. Once that was done, I had full control over the entire global system. I used the word “staggering” to describe the amount of data I had access to in the Jacuzzi SmartTub hack, but that was relatively minor compared to this.
I had full access to internal Toyota projects, documents, and user accounts, including user accounts of Toyota’s external partners/suppliers. External accounts include users from: Michelin Continental Stanley Bl
(read more)
Facebook for iOS (FBiOS) is the oldest mobile codebase at Meta. Since the app was rewritten in 2012, it has been worked on by thousands of engineers and shipped to billions of users, and it can support hundreds of engineers iterating on it at a time. After years of iteration, the Facebook codebase does not resemble a typical iOS codebase:

It’s full of C++, Objective-C(++), and Swift.
It has dozens of dynamically loaded libraries (dylibs), and so many classes that they can’t be loaded into Xcode at once.
There is almost zero raw usage of Apple’s SDK; everything has been wrapped or replaced by an in-house abstraction.
The app makes heavy use of code generation, spurred by Buck, our custom build system.
Without heavy caching from our build system, engineers would have to spend an entire workday waiting for the app to build.

FBiOS was never intentionally architected this way. The app’s codebase reflects 10 years of evolution, spurred by technical decisions necessary to support the growing number of engineers working on the app, its stability, and, above all, the user experience. Now, to celebrate the codebase’s 10-year anniversary, we’re shedding some light on the technical decisions behind this evolution, as well as their historical context.

2014: Establishing our own mobile frameworks

Two years after Meta launched the native rewrite of the Facebook app, News Feed’s codebase began to have reliability issues. At the time, News Feed’s data models were backed by Apple’s default framework for managing data models: Core Data. Objects in Core Data are mutable, and that did not lend itself well to News Feed’s multithreaded architecture. To make matters worse, News Feed utilized bidirectional data flow, stemming from its use of Apple’s de facto design pattern for Cocoa apps: Model View Controller. Ultimately, this design encouraged nondeterministic code whose bugs were very difficult to debug or reproduce.
It was clear that this architecture was not sustainable and it was time to rethink it. While considering new designs, one engineer investigated React, Facebook’s (open source) UI framework, which was becoming quite popular
(read more)
Small embedded cores have little area to spare for security features and yet must often run code written in unsafe languages and, increasingly, are exposed to the hostile Internet. CHERIoT (Capability Hardware Extension to RISC-V for Internet of Things) builds on top of CHERI and RISC-V to provide an ISA and software model that lets software depend on object-granularity spatial memory safety, deterministic use-after-free protection, and lightweight compartmentalization exposed directly to the C/C++ language model. It can run existing embedded software components on a clean-slate RTOS that scales up to large numbers of isolated (yet securely communicating) compartments, even on systems with under 256 KiB of SRAM. This technical report is accompanied by three open source releases
(read more)
Note: The following blog post more or less applies to any dynamically typed programming language, e.g. Ruby. I am only sharing my experience and frustrations with Python, cos that’s the language I use.

I started learning to program with Python, which holds a special place in my heart. It’s the language which taught me how to think about programming, model a problem as code and communicate with the machine. I was hired at my current job (or the previous one) because of Python. I was a fanboy and evangelist. I spent a lot of time in Python communities, learning and helping others. When I started mentoring beginners to code, Python was my choice of language.

I distinctly remember having the following conversation with a friend:

Friend: C# makes life a lot easier with types
Me: I haven’t had t
(read more)
Netflix wants to chop down your family tree: The enshittification cycle comes for your most private domain.

Hey look at this: Delights to delectate.
This day in history: 2008, 2013, 2018, 2022
Colophon: Recent publications, upcoming/recent appearances, current writing projects, current reading

Netflix has unveiled the details of its new anti-password-sharing policy, detailing a suite of complex gymnastics that customers will be expected to undergo if their living arrangements trigger Netflix’s automated enforcement mechanisms: https://thestreamable.com/news/confirmed-netflix-unveils-first-details-of-new-anti-password-sharing-measures

If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog: https://pluralistic.net/2023/02/02/nonbinary-families/#red-envelopes

Netflix says that its new policy allows members of the same "household" to share an account. This policy comes with an assumption: that there is a commonly understood, universal meaning of "household," and that software can determine who is and is not a member of your household. This is a very old co
(read more)
UPM is our internal standalone library for performing static analysis of SQL code and enhancing SQL authoring. UPM takes SQL code as input and represents it as a data structure called a semantic tree. Infrastructure teams at Meta leverage UPM to build SQL linters, catch user mistakes in SQL code, and perform data lineage analysis at scale. Executing SQL queries against our data warehouse is important to the workflows of many engineers and data scientists at Meta for analytics and monitoring use cases, either as part of recurring data pipelines or for ad-hoc data exploration. While SQL is extremely powerful and very popular among our engineers, we’ve also faced some challenges over the years, namely: A need for static analysis capabilities: In a growing number of use cases at Meta, we must understand programmatically what happens in SQL queries before they are executed against our query engines — a task called static analysis. These use cases range from performance linters (suggesting query optimizations that query engines cannot perform automatically) to data lineage analysis (tracing how data flows from one table to another). This was hard for us to do, for two reasons. First, while query engines internally have some capabilities to analyze a SQL query in order to execute it, this query analysis component is typically deeply embedded inside the query engine’s code. It is not easy to extend, and it is not intended for consumption by other infrastructure teams. Second, each query engine has its own analysis logic, specific to its own SQL dialect; as a result, a team that wants to build a piece of analysis for SQL queries would have to reimp
(read more)
What is binary-to-text encoding? What better place to start than Wikipedia's article on binary-to-text encoding:

A binary-to-text encoding is encoding of data in plain text. [A bit recursive perhaps?]
More precisely, it is an encoding of binary data in a sequence of printable characters.
These encodings are necessary for transmission of data when the channel does not allow binary data (such as email or NNTP) or is not 8-bit clean.
PGP documentation (RFC 4880) uses the term "ASCII armor" for binary-to-text encoding when referring to Base64.

OK, I've split their introduction into separate phrases. Setting aside the first phrase, which is somewhat recursive, the second one is the technically correct definition, emphasizing the need to transform arbitrary data (nowadays arbitrary byte sequences) into printable data (nowadays most likely ASCII character sequences). The other two sentences (mentioning "8-bit clean" or "PGP") are perhaps better suited for a section dedicated to early computing history (in the case of "8-bit cleanness") or to the "good ideas that have failed to meet the market" history (in the case of PGP)...

As a small example, below are the first 440 bytes of MBR-partitioned disks, which is actually binary machine-executable code, part of the early (legacy) boot stages, as provided by the SYSLINUX project.

base64 < /usr/share/syslinux/mbr.bin
M8D6jtiO0LwAfInmBleOwPv8vwAGuQAB86XqHwYAAFJStEG7qlUxyTD2+c0TchOB+1WqdQ3R6XMJ
ZscGjQa0QusVWrQIzROD4T9RD7bGQPfhUlBmMcBmmehmAOg1AU1pc3Npbmcgb3Blcm
(read more)
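The shell one-liner above can be reproduced with Python's standard library; here is a small self-contained sketch (it uses an in-memory byte string rather than the SYSLINUX mbr.bin file, so it runs anywhere):

```python
import base64

# Any byte sequence works; this one deliberately includes non-printable bytes.
raw = bytes(range(256))

# Armor the bytes as printable ASCII (Base64), then recover them.
armored = base64.b64encode(raw)
restored = base64.b64decode(armored)

assert restored == raw
# Base64 maps every 3 input bytes to 4 output characters (plus '=' padding),
# so 256 bytes become 4 * ceil(256 / 3) = 344 characters.
print(len(raw), len(armored))  # prints: 256 344
```

Every byte of `armored` falls in the printable ASCII range, which is exactly the property that makes it safe for channels that are not 8-bit clean.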
Dunno if the macOS keychain counts. I think it would count; however, I've recently read two articles by Matthew Garrett about the topic, and they don't seem encouraging: https://mjg59.dreamwidth.org/64968.html https://mjg59.dreamwidth.org/65462.html However, I'm searching for something portable (at least in the UNIX / POSIX world). About the GnuPG situation: although I am an active (long-time) user of GnuPG, and I've even hacked together a password manager (similar to pass) with GnuPG, I really hate the overall experience (engineering and UX)… I've looked forward to age,
(read more)
When you’re in the business of selling software to people, you tend to get a few chances to migrate data from their legacy software to your shiny new system. Most recently for me that has involved public health data exported from legacy disease surveillance systems into PostgreSQL databases, for use by the open source EpiTrax system and its companion EMSA. We have collected a few tips that may help you learn from our successes, as well as our mistakes (that is, our particularly educational experiences).

Customer Management

Your job is to satisfy your customers, and your customers
(read more)
Breaking news: OpenTTD 13.0 is now available! Depending on your perspective, we’re either two months early for our usual April 1st release, or a bit tardy for the Christmas 2022 release we intended. We think the wait has been worth it. This is one of the largest releases we’ve done in several years, with numerous features and improvements covering the user interface, gameplay features, and modding extensions for NewGRF and Game Script creators. Some of the highlights are: Variable interface scaling at whatever size you want (not just 2x and 4x), with optional chunky
(read more)
2023-02-05

CPUs running Intel’s Skylake-X microarchitecture have a curious bug that I haven’t seen mentioned anywhere: the AVX-512 compression instructions have a false dependency on their destination. In other words, the following two instructions have identical performance characteristics:

vcompressps X{k}, Y
vcompressps X{k}{z}, Y

Whereas we would expect the latter to depend only on k and Y, it also depends on X. The problem seems to have been fixed in Icelake. Surprisingly, while it affects all compression operations, it does not affect any of the expansion operations. Presumably, this is related to the odd behaviour of compression with a memory destination; expansion can’t target memory. One thing I have never understood is why compression and expansion operations pu
(read more)
02 Feb 2023 When I see the landscape of native GUI in 2022, I feel like something is missing. I don’t just mean Rust UI. My frustrations with UI frameworks started long before I’d even heard of Rust. The origin story: Qt and fear The Qt framework is a C++ toolkit for writing GUI apps. In 2019, I spent a year working on a Qt project for an energy company, a diagram editor meant to be used by electrical engineers. I was brought in as a consultant late in the project, to fix bugs and add small features before the product shipped. I had already worked on Qt projects before, but they were mostly amateur projects, toy examples and the like. This was the first GUI project of industrial scale that I worked on, and it was fascinating and/or infuriat
(read more)
Abstract As the Web is becoming ubiquitous, interactive, and multimodal, technology needs to deal increasingly with human factors, including emotions. The specification of Emotion Markup Language 1.0 aims to strike a balance between practical applicability and scientific well-foundedness. The language is conceived as a "plug-in" language suitable for use in three different areas: (1) manual annotation of data; (2) automatic recognition of emotion-related states from user behavior; and (3) generation of emotion-related system behavior. Status of this document This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in
(read more)
It's a world of laughter
a world of song
it's a world where pointers are four bytes long
There's only so much cache
we must clean up our trash
It's a smol world after all!

It's a world of strings and arrays for you
dictionaries and ints can be found here too
Everything's quite compact
there's nothing that we lack
it's a smol, smol world!

do {
    It's a smol world after all...
    It's a smol world after all...
    It's a smol world after all...
    It's a smol, smol world
} while(!insane);

What?

smol world is an experimental memory manager and object model, which tries to optimize for small data size. It provides:

- A memory space called a “heap”, which internally uses 32-bit pointers
- A super fast “bump” or “arena” memory allocator
- Allocated blocks with only
(read more)
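Two items on that list, a heap addressed by 32-bit offsets and a bump allocator, can be sketched in a few lines of Python (the class and method names are invented for illustration; this is not smol world's actual API):

```python
class ToyHeap:
    """Toy bump allocator over a fixed buffer, addressed by 32-bit offsets.

    Illustrative only: a real allocator adds alignment, block headers,
    and some way to reclaim memory.
    """

    def __init__(self, size: int = 256 * 1024):  # e.g. 256 KiB of "SRAM"
        assert size < 2**32          # every offset must fit in 32 bits
        self.buf = bytearray(size)
        self.top = 0                 # bump pointer: next free offset

    def alloc(self, nbytes: int) -> int:
        """Return a 32-bit offset into buf, or raise if the heap is full."""
        if self.top + nbytes > len(self.buf):
            raise MemoryError("heap exhausted")
        offset = self.top
        self.top += nbytes           # "bump" the pointer; that is the whole allocator
        return offset

heap = ToyHeap()
a = heap.alloc(16)
b = heap.alloc(8)
print(a, b)  # prints: 0 16
```

Allocation is one comparison and one addition, which is why bump allocators are so fast, and returning 4-byte offsets instead of 8-byte native pointers is what halves the size of every reference the heap stores.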
Introduction

Smalltalk has suffered because it lacked a testing culture. This column describes a simple testing strategy and a framework to support it. The testing strategy and framework are not intended to be complete solutions, but rather a starting point from which industrial strength tools and procedures can be constructed. The paper is divided into four sections:

Philosophy - Describes the philosophy of writing and running tests embodied by the framework. Read this section for general background.
Cookbook - A simple pattern system for writing your own tests.
Framework - A literate program version of the testing framework. Read this for in-depth knowledge of how the framework operates.
Example - An example of using the testing frame
(read more)
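The framework described here is SUnit, the ancestor of the whole xUnit family, so its write-a-fixture, run-and-check pattern can be shown in Python's unittest, a direct descendant (a sketch, not the Smalltalk code; `StackTest` and its methods are invented for illustration):

```python
import unittest

class StackTest(unittest.TestCase):
    """The pattern SUnit pioneered: fixture in setUp, one check per test method."""

    def setUp(self):
        # Fixture: every test method starts from this known state.
        self.stack = []

    def test_new_stack_is_empty(self):
        self.assertEqual(len(self.stack), 0)

    def test_push_then_pop_returns_last_item(self):
        self.stack.append(42)
        self.assertEqual(self.stack.pop(), 42)

# Run the suite programmatically, the way SUnit's test runner would.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(StackTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # prints: True
```

Each test method gets a fresh fixture from setUp, so tests stay independent of each other and of the order in which the runner invokes them, which is the core of the philosophy the column describes.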
The Market for Lemons For most of the past decade, I have spent a considerable fraction of my professional life consulting with teams building on the web. It is not going well. Not only are new services being built to a self-defeatingly low UX and performance standard, existing experiences are pervasively re-developed on unspeakably slow, JS-taxed stacks. At a business level, this is a disaster, raising the question: "why are new teams buying into stacks that have failed so often before?" In other words, "why is this market so inefficient?" George Akerlof's most famous paper introduced economists to the idea that information asymmetries distort markets and reduce the quality of goods because sellers with more information ca
(read more)
What are you doing this week? Feel free to share! Keep in mind it’s OK to do nothing at all, too.
(read more)