Since initially surfacing in August 2020, the creators of DARKSIDE ransomware and their affiliates have launched a global crime spree affecting organizations in more than 15 countries and multiple industry verticals. Like many of their peers, these actors conduct multifaceted extortion where data is both exfiltrated and encrypted in place, allowing them to demand payment for unlocking and for the non-release of stolen data to exert more pressure on victims.

The origins of these incidents are not monolithic. DARKSIDE ransomware operates as a ransomware-as-a-service (RaaS) wherein profit is shared between its owners and partners, or affiliates, who provide access to organizations and deploy the ransomware. Mandiant currently tracks multiple threat clusters that have deployed this ransomware, which is consistent with multiple affiliates using DARKSIDE. These clusters demonstrated varying levels of technical sophistication throughout intrusions. While the threat actors commonly relied on commercially available and legitimate tools to facilitate various stages of their operations, at least one of the threat clusters also employed a now patched zero-day vulnerability. Reporting on DARKSIDE has been available in advance of this blog post to users of Mandiant Advantage Free, a no-cost version of our threat intelligence platform.

Targeting

Mandiant has identified multiple DARKSIDE victims through our incident response engagements and from reports on the DARKSIDE blog. Most of the victim organizations were based in the United States and spanned multiple sectors, including financial services, legal, manufacturing, professional services, retail, and technology. The number of publicly named victims on the DARKSIDE blog has increased overall since August 2020, with the exception of a significant dip in the number of victims named during January 2021 (Figure 1). It is plausible that the decline in January was due to threat actors using DARKSIDE taking a break during the holiday season. The overall growth in the number of victims demonstrates the increasing use of the DARKSIDE ransomware by multiple affiliates.

Figure 1: Known DARKSIDE victims (August 2020 to April 2021)

DARKSIDE Ransomware Service

Beginning in November 2020, the Russian-speaking actor "darksupp" advertised DARKSIDE RaaS on the Russian-language forums exploit.in and xss.is. In April 2021, darksupp posted an update for the "Darkside 2.0" RaaS that included several new features and a description of the types of partners and services they were currently seeking (Table 1). Affiliates retain a percentage of the ransom fee from each victim. Based on forum advertisements, the RaaS operators take 25% for ransom fees less than $500,000, but this decreases to 10% for ransom fees greater than $5 million. In addition to providing builds of DARKSIDE ransomware, the operators of this service also maintain a blog accessible via TOR. The actors use this site to publicize victims in an attempt to pressure these organizations into paying for the non-release of stolen data. A recent update to their underground forum advertisement also indicates that actors may attempt to DDoS victim organizations. Th
(read more)
OpenZFS stands out in its snapshot design, providing powerful and easy-to-use tools for managing snapshots. Snapshots complement a backup strategy, as they are instantaneous and don’t require a backup window. Since snapshots are atomic, they are not affected by other processes and you don’t have to stop any running applications before taking a snapshot. What exactly is a snapshot? zfs(8) defines it as a “Read-only version of a file system … at a given point in time”. This is a powerful feature as there are many scenarios where it is convenient to access files from a certai
(read more)
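To make the workflow concrete, here is a minimal sketch of taking and listing snapshots by shelling out to zfs(8) from Python; the dataset name tank/home is an assumption, and in practice you would run the equivalent zfs commands directly in a shell.

```python
# Minimal sketch: create and list ZFS snapshots via zfs(8).
# Assumes a dataset named "tank/home" exists and sufficient privileges.
import subprocess
from datetime import datetime

def take_snapshot(dataset: str) -> str:
    name = f"{dataset}@{datetime.now():%Y%m%d-%H%M%S}"
    # Snapshots are atomic and instantaneous; no need to stop applications first.
    subprocess.run(["zfs", "snapshot", name], check=True)
    return name

def list_snapshots(dataset: str) -> list[str]:
    out = subprocess.run(
        ["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-r", dataset],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.split()

print(take_snapshot("tank/home"))
print("\n".join(list_snapshots("tank/home")))
```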
Table of contents: Overview, Installation, Usage, Issues & Bug Reports, Dependencies, Contribution, References, Authors, License, Show Your Support, Changelog, Code of Conduct

Overview

Breathing gymnastics is a system of breathing exercises that focuses on the treatment of various diseases and general health promotion. Nafas is a collection of breathing gymnastics designed to reduce the exhaustion of long working hours. With multiple breathing patterns, Nafas helps you find your way to a detoxified, energetic workday and also improves your concentration by increasing the oxygen level. No ne
(read more)
PEP: 659
Title: Specializing Adaptive Interpreter
Author: Mark Shannon
Status: Active
Type: Informational
Created: 13-Apr-2021
Post-History: 11-May-2021

In order to perform well, virtual machines for dynamic languages must specialize the code that they execute to the types and values in the program being run. This specialization is often associated with "JIT" compilers, but is beneficial even without machine code generation. A specializing, adaptive interpreter is one that speculatively specializes on the types or values it is currently operating on, and adapts to changes in those types and values. Specialization gives us improved performance, and adaptation allows the interpreter to rapidly change when the pattern of usage in a program alters, limiting the amount of additional work caused by mis-specialization. This PEP proposes using a specializing, adaptive interpreter that specializes code aggressively, but over a very small region, and is able to adjust to mis-specialization rapidly and at low cost. Adding a specializing, adaptive interpreter to CPython will bring significant performance improvements. It is hard to come up with meaningful numbers, as it depends very much on the benchmarks and on work that has not yet happened. Extensive experimentation suggests speedups of up to 50%. Even if the speedup were only 25%, this would still be a worthwhile enhancement. Python is widely acknowledged as slow. Whilst Python will never attain the performance of low-level languages like C, Fortran, or even Java, we would like it to be competitive with fast implementations of scripting languages, like V8 for Javascript or luajit for lua. Specifically, we want to achieve these performance goals with CPython to benefit all users of Python including those unable to use PyPy or other alternative virtual machines. Achieving these performance goals is a long way off, and will require a lot of engineering effort, but we can make a significant step towards those goals by speeding up the interpreter. Both academic research and pr
(read more)
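As a rough illustration of the idea (a toy analogy, not how CPython's adaptive interpreter is actually implemented), a call site can cache a specialized fast path for the operand types it keeps seeing and fall back to a generic path when the speculation misses:

```python
# Toy illustration of a specializing, adaptive call site (NOT CPython's machinery).
class AdaptiveAdd:
    def __init__(self):
        self.specialized_for = None
        self.misses = 0

    def __call__(self, a, b):
        if self.specialized_for == (int, int) and type(a) is int and type(b) is int:
            # Fast path: the speculation holds, skip the generic dispatch below.
            return a + b
        # Miss: take the generic path and re-adapt cheaply.
        self.misses += 1
        if type(a) is int and type(b) is int:
            self.specialized_for = (int, int)
        else:
            self.specialized_for = None
        return a + b

add = AdaptiveAdd()
print(add(1, 2))      # generic path; the call site specializes for (int, int)
print(add(3, 4))      # specialized fast path
print(add("a", "b"))  # usage pattern changed: mis-specialization handled cheaply
print(add.misses)     # 2
```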
Archive-name: motorola/68k-chips-faq
Posting-Frequency: monthly
Last-modified: 1996/01/06
Version: 22

Frequently Asked Questions (FAQ) comp.sys.m68k

This list is maintained by: Robert Boys, San Jose, California (formerly from Ontario, CANADA)
Email: [email protected] or [email protected]
January 6, 1996 - this is the 22nd list

I am finally updating this FAQ! I have been quite busy lately.

I hope all of you reading this, your family and friends had a wonderful and peaceful Christmas and New Year holiday wherever you may happen to live in the world. I wish that all of you have a continuing prosperous and safe 1996.

As you may have noticed in my header - I have moved from the land of ice and snow (Canada) to sunny California. I now work for Hitex Development Tools - aka HiTOOLS Inc. They sell emulators and such. Watch for me at tradeshows.

VMEbus, M68K and HC11 information may be sent to [email protected]

I have a new Homepage: http://www2.best.com/~rboys (California). The latest version of this FAQ is stored there - i.e. the "work in process" version. I will be getting it running in the next few weeks. This is also true for the FAQ for comp.arch.bus.vmebus. My backup Homepage is http://www.sentex.net/~rboys (Canada)

This FAQ is also stored on:
Canada - http://www.ee.ualberta.ca/archive/m68kfaq.html
Germany - http://www.ba-karlsruhe.de/automation/FAQ/m68k
California - http://www.hitex.com/automation/FAQ/m68k

You can also retrieve the entire set of files (gifs and text) by pointing your Browser (Netscape 1.1n does this) at:
http://www.ee.ualberta.ca/archive/m68kfaq.zip
http://www.hitex.com/automation/Faq/m68kfaq.zip
(read more)
Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points (vectors with additional payload). It is tailored for extended filtering support, which makes it useful for all sorts of neural-network or semantic-based matching, faceted search, and other applications. Qdrant is written in Rust 🦀, which makes it reliable even under high load.

API

Online OpenAPI 3.0 documentation is available here. OpenAPI makes it easy to generate a client for virtually any framework or programming language. You can also download the raw OpenAPI definitions.

Features

Filtering: Qdrant supports any combination of should, must and must_not conditions, which makes it possible to use in applications where an object cannot be described solely by a vector. These could be location features, availability flags, and other custom properties businesses need to take into account.

Write-ahead logging: Once the service has confirmed an update, it won't lose data even in the case of a power failure. All operations are stored in the update journal and the latest database state can easily be reconstructed at any moment.

Stand-alone: Qdrant does not rely on any external database or orchestration controller, which makes it very easy to configure.

Usage

Docker: Build your own from source

  docker build . --tag=qdrant

Or use the latest pre-built image from DockerHub:

  docker pull generall/qdrant

To run the container, use:

  docker run -p 6333:6333 \
    -v $(pwd)/path/to/data:/qdrant/storage \
    -v $(pwd)/path/to/custom_config.yaml:/qdrant/config/production.yaml \
    qdrant

/qdrant/storage is where Qdrant persists all your data. Make sure to mount it as a volume, otherwise Docker will drop it with the container. /qdrant/config/production.yaml is the file with the engine configuration; you can override any value from the reference config. Now Qdrant should be accessible at localhost:6333.

Examples

This example covers the most basic use case: collection creation and basic vector search. For additional information please refer to the documentation.

Create collection: First, let's create a collection with a dot-product metric.

  curl -X POST 'http://localhost:6333/collections' \
    -H 'Content-Type: application/json' \
    --data-raw '{ "create_collection": { "name": "test_collection", "vector_size": 4, "distance": "Dot" } }'

Expected response:

  { "result": true, "status": "ok", "time": 0.031095451 }

We can check that the collection was created:

  curl 'http://localhost:6333/collections/test_collection'

Expected response:

  { "result": { "vectors_count": 0, "segments_count": 5, "disk_data_size": 0, "ram_data_size": 0, "config": { "vector_size": 4, "index": { "type": "plain", "options": {} }, "distance": "Dot", "storage_type": { "type": "in_memory" } } }, "status": "ok", "time": 2.1199e-05 }

Add points: Let's now add vectors with some payload:

  curl -L -X POST 'http://localhost:6333/collections/test_collection?wait=true' \
    -H 'Content-T
(read more)
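The same calls can be made from Python with the requests library; this mirrors the curl examples above, and the payload format follows the (older) API version shown in this excerpt, so it may differ in current Qdrant releases.

```python
# Mirror of the curl examples above using Python requests.
# Payloads follow the API version shown in the excerpt; newer Qdrant may differ.
import requests

BASE = "http://localhost:6333"

# Create a collection with 4-dimensional vectors and dot-product distance.
resp = requests.post(
    f"{BASE}/collections",
    json={
        "create_collection": {
            "name": "test_collection",
            "vector_size": 4,
            "distance": "Dot",
        }
    },
)
print(resp.json())  # expect {"result": true, "status": "ok", ...}

# Confirm the collection exists and inspect its config.
info = requests.get(f"{BASE}/collections/test_collection").json()
print(info["result"]["config"])
```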
The page you have tried to view (A pair of memory-allocation improvements in 5.13) is currently available to LWN subscribers only. Reader subscriptions are a necessary way to fund the continued existence of LWN and the quality of its content. If you are already an LWN.net subscriber, please log in with the form below to read this content. Please consider subscribing to LWN. An LWN subscription provides numerous benefits, including access to restricted content and the warm feeling of knowing that you are helping to keep LWN alive. (Alterna
(read more)
2021-05-12 23:09 · Toolchains

This post is the AArch64 counterpart of my "Speedbuilding LLVM/Clang in 5 minutes" article. After publishing and sharing the previous post URL with some friends on IRC, I was asked if I wanted to try doing the same on a 160-core ARM machine. Finding out what my answer was is left as an exercise to the reader :-) The system I'm using for this experiment is a BM.Standard.A1.160 bare-metal machine from Oracle Cloud, which has a dual-socket motherboard with two 80-core Ampere Altra CPUs, for a total of 160 cores, and 1024 GB of RAM. This is, to the best of my knowledge, the fastest AArch64 server machine available at this time. The system is running Oracle Linux
(read more)
Abstract: The traditional Domain Name System (DNS) lacks fundamental features of security and privacy in its design. As concerns about privacy on the Internet have increased, security and privacy enhancements of DNS have been actively investigated and deployed. Especially for users' privacy in DNS queries, several relay-based anonymization schemes have recently been introduced; however, they are vulnerable to collusion between a relay and a full-service resolver, i.e., the identities of users cannot be hidden from the resolver. This paper introduces a new concept of a multiple-relay-based DNS for user anonymity in DNS queries, called the mutualized
(read more)
One of the benefits of containers over virtual machines is that you get some measure of isolation without the performance overhead or distortion of virtualization. Docker images therefore seem like a good way to get a reproducible environment for measuring CPU performance of your code. There are, however, complications. Sometimes, running under Docker can actually slow down your code and distort your performance measurements. On macOS and Windows, for example, standard Linux-based Docker containers aren’t actually running directly on the OS, since the OS isn’t Linux. And the image filesystem from the container itself is typically mounted with some sort of overlay filesystem, whi
(read more)
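One cheap sanity check is to run the same CPU-bound workload on the host and inside the container and compare the numbers; a rough sketch, assuming Python is available in both environments:

```python
# bench.py - tiny CPU-bound workload to run both on the host and inside a
# container, to get a rough feel for any overhead or distortion.
import time

def work(n: int = 2_000_000) -> int:
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    runs = []
    for _ in range(5):
        start = time.perf_counter()
        work()
        runs.append(time.perf_counter() - start)
    print(f"best of 5: {min(runs):.3f}s  median: {sorted(runs)[2]:.3f}s")
```

Run it as python3 bench.py on the host and, for example, docker run --rm -v "$PWD":/src python:3.9 python3 /src/bench.py inside a container; large gaps, especially once file I/O is involved, point to the distortions described above.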
Join the official community for Google Workspace administrators In the Cloud Connect Community, discuss the latest features with Googlers and other Google Workspace admins like you. Learn tips and tricks that will make your work and life easier. Be the first to know what's happening with Google Workspace. ______________ Learn about more Google Workspace launches On the “What’s new in Google Workspace?” Help Center page, learn about new products and features launching in Google Workspace, including smaller changes that haven’t been announced on the Google Workspace Updates blog. ______________
(read more)
18 November 2015 In order to defend and preserve the honor of the profession of computer programmers, I Promise that, to the best of my ability and judgement: I will not produce harmful code. The code that I produce will always be my best work. I will not knowingly allow code that is defective either in behavior or structure to accumulate. I will produce, with each release, a quick, sure, and repeatable proof that every element of the code works as it should. I will make frequent, small, releases so that I do not impede the progress of others. I will fearlessly and relentlessly improve my creations at every opportunit
(read more)
Today I'm going to share my perspective on how Ruby on Rails is developed and governed and how I feel the Basecamp "incident" impacts the future of Rails. I'm going to start out telling you what I know for sure, dip into some unknowns, and dive into some hypotheticals for fun.

First off, who am I? Find me @schneems on Twitter and GitHub. I first contributed to Ruby on Rails ten years ago in 2011, and I'm in the top 50 contributors (by commits). I help maintain a few open source projects, including Puma and the Ruby Buildpack for Heroku (my day job). I've got 1,611,289,709 and counting gem downloads to my name. In the Ruby on Rails ecosystem, I am known as a "contributor". In Rails speak, that means that I also have commit access to the project. Note: "getting commit" means that the person can merge PRs, close issues, and (depending on configuration) push to the main branch.

The Basecamp incident

The reason I'm bringing this all up is that the Rails world has been reeling. DHH is not only the creator of Rails but also the co-founder of Basecamp. Basecamp has been in the news a lot after having ~1/3 of their employees quit. Many have also asked about Rails' governance. To bring you up to speed on the loose timeline of the Basecamp news:

- Jason Fried made a highly criticized blog post
- DHH doubled down
- Then again
- Then Casey Newton broke the story of the events that led to that seemingly-out-of-nowhere series of posts
- DHH responded to the story
- The week ended with over 1/3 of Basecamp employees quitting
- Then Casey posted again with some more details of an interaction with Ryan Singer the day most employees left

It's hard to fully capture the response of the Twitterverse to the original announcements and news. A few voices of opposition were louder on my timeline than others and I wanted to share them for additional context: Kim Crayton (Apr 26 Twitter thread, Apr 28 tweet, Apr 30 tweet, May 1 tweet and video link, May 4 thread), John Breen (blog post response, Twitter thread following people leaving Basecamp), Emily Pothast "T
(read more)
Soapbox is a social media server empowering communities online. Today we're releasing Soapbox BE v1.0 based on Pleroma! 🎉 Tue, May 11, 2021 Based on Pleroma Soapbox BE is a production ready Pleroma branch based on Pleroma 2.3 stable. It's being maintained alongside Pleroma, with additional bugfixes and features. Our goal is to move faster, while taking deliberate care to ensure clean code merges between projects. Soapbox BE contains code that has not yet been merged (or may never be merged) by Pleroma. A new foundation A big part of this release was just laying the groundwork to support another Fediverse backend: an updated website, an issue tracker, and proper documentation. We are now free to do things that weren't possible before. A full list of differences between Soapbox and Pleroma is documented here. Soapbox FE by default Soapbox FE is the default frontend of Soapbox BE. We think this is a good choice for growing the Fediverse, and it gives us more control over how the FE and BE interact. It will still be possible to switch to another frontend as usual. See the FAQ below for details. Rich media embeds Share a link to a popular video site, and users can watch right from their timeline! The following sites are tested to work: YouTube
(read more)
aiomixer, X/Open Curses and ncurses, and other news
May 12, 2021, posted by Nia Alarie

aiomixer is an application that I've been maintaining outside of NetBSD for a few years. It was available as a package, and was a "graphical" (curses, terminal-based) mixer for NetBSD's audio API, inspired by programs like alsamixer. For some time I've thought that it should be integrated into the NetBSD base system - it's small and simple, very useful, and many developers and users had it installed (some told me that they would install it on all of their machines that needed audio output). For my particular use case, as well as my NetBSD laptop, I have some small NetBSD machines around the house plugged into speakers that I play music from. Sometimes I like to SSH into them to adjust the playback volume, and it's often easier to do visually than with mixerctl(1). However, there was one problem: when I first wrote aiomixer 2 years ago, I was intimidated by the curses API, so opted to use the Curses Development Kit instead. This turned out to be a mistake, as not only was CDK inflexible for an application like aiomixer, it introduced a hard dependency on ncurses.

X/Open Curses and ncurses

Many people think ncurses is the canonical way to develop terminal-based applications for Unix, but it's actually an implementation of the X/Open Curses specification. There are a few other Curses implementations: NetBSD libcurses, Solaris libcurses, and PDCurses (used on Windows). NetBSD curses is descended from the original BSD curses, but contains many useful extensions from ncurses as well. We use it all over the base system, and for most packages in pkgsrc. It's also been ported to other operating systems, including Linux. As far as I'm aware, NetBSD is one of the last operating systems left that doesn't primarily depend on ncurses. There's one crucial incompatibility, however: ncurses exposes its internal data structures; NetBSD libcurses keeps them opaque. Since CDK development is very tied to ncurses development (they have the same maintainer), CDK peeks into those structures, and can't be used with NetBSD libcurses. There are also a few place
(read more)
Chances are, if you are writing Ruby code, you are using Sidekiq to handle background processing. If you are coming from ActiveJob or some other background, stay tuned; some of the tips covered can be applied there as well. Folks utilize (Sidekiq) background jobs for different cases. Some crunch numbers, some dispatch welcome emails to users, and some schedule data syncing. Whatever your case may be, you might eventually run into a requirement to avoid duplicate jobs. By duplicate jobs, I mean two jobs that do the exact same thing. Let's dive in on that a bit.

Why De-Duplicate Jobs?

Imagine a scenario where your job looks like the following:

  class BookSalesWorker
    include Sidekiq::Worker

    def perform(book_id)
      crunch_some_numbers(book_id)
      upload_to_s3
    end

    ...
  end

The BookSalesWorker always does the same thing — it queries the DB for a book based on the book_id and fetches the latest sales data to calculate some numbers. Then, it uploads them to a storage service. Keep in mind that every time a book is sold on your website, you will have this job enqueued. Now, what if you got 100 sales at once? You'd have 100 of these jobs doing the exact same thing. Maybe you are fine with that. You don't care about S3 writes that much, and your queues aren't as congested, so you can handle the load. But, "does it scale?"™️ Well, definitely not. If you start receiving more sales for more books, your queue will quickly pile up with unnecessary work. If you have 100 jobs that do the same thing for a single book, and you have 10 books selling in parallel, you are now 1000 jobs deep in your queue, where in reality you could just have 10 jobs, one per book. Now, let's go through a couple of options for preventing duplicate jobs from piling up in your queues.

1. DIY Way

If you are not a fan of external dependencies and complex logic, you can go ahead and add a custom solution to your codebase. I created a sample repo to try out our examples first-hand. There will be a link in each approach to the example.

1.1 One Flag Approach

You can add one flag that decides whether to enqueue a job or not. One might add a s
(read more)
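The article's examples are Ruby and Sidekiq-specific; purely to illustrate the "one flag" idea it describes, here is the same logic sketched in Python with redis-py (all names, such as enqueue_book_sales, are made up for this sketch):

```python
# Language-swapped sketch of the "one flag" de-duplication idea described above,
# using redis-py instead of Ruby/Sidekiq. All names here are illustrative.
import redis

r = redis.Redis()

def enqueue_book_sales(book_id: int, enqueue) -> bool:
    """Enqueue a sales-crunching job unless one is already pending for this book."""
    flag_key = f"book_sales_pending:{book_id}"
    # SET with NX acts as the flag: only the first caller per book gets to enqueue.
    # The TTL guards against a stuck flag if a job dies before clearing it.
    if r.set(flag_key, 1, nx=True, ex=600):
        enqueue(book_id)
        return True
    return False  # duplicate: a job for this book is already queued

def on_job_finished(book_id: int) -> None:
    # The worker clears the flag when it completes, allowing the next enqueue.
    r.delete(f"book_sales_pending:{book_id}")
```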
© 2019 DjangoCon Photo of the final session of DjangoConEurope 2019 organized by DjangoDenmark in Copenhagen - CC BY-NC-SA TL;DR Over the past week, the Italian Django community has translated an important part of Django’s documentation into Italian, allowing for online publication and considerably increasing linguistic diversity in the community. Linguistic diversity¶ “If you talk to a man in a language he understands, that goes to his head. If you talk to him in his language, that goes to his heart.” — Nelson Mandela I have been dealing with FLOSS (Free Libre and Open Source software) for more than 20 years and I must admit that the community has grown a lot and the issues dealt with within it have improved. A very important issue that has become important lately is the
(read more)
Device tree in GUI. The RISC-V port uses the new driver API and device manager.

We demand a uname -a output in Terminal. How about the About app?

How much RAM does this port need?

X512 (May 12, 2021, 8:30am, #84): Bash may need to be recompiled because of broken statically linked glue code (crti, crtn).
(read more)
Austin Z. Henley, Assistant Professor

5/11/2021

This post is an informal summary of our recent ICSE'21 Education idea paper, "An Inquisitive Code Editor for Addressing Novice Programmers' Misconceptions of Program Behavior". Check out the preprint for more details. Special thanks to the NSF for funding this work. What if a code editor could detect that you have a potential misunderstanding of your code and help you overcome the misunderstanding? I'll be describing the approach and prototype we've been working on in the context of helping novice programmers learn how to code. Let's start with this snippet of code based on a student's homework submission:

  response = 0
  while response != 'y' or response != 'n':
      response = input("Please enter (y)es or (n)o. \n")

The program asks the user to input y for yes or n for no, and it will repeat until valid input is given. However, the loop will actually never end because of a mistake. The "or" should instead be an "and". These types of errors are known as semantic errors. It compiles, but does not behave the
(read more)
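For reference, the corrected condition the excerpt points to: with "and" instead of "or", the loop exits as soon as either valid answer is entered.

```python
# The fix the article describes: with "or" the condition is always true
# (response can never equal both 'y' and 'n'), so the loop never ends.
# With "and", the loop exits once the user enters either valid answer.
response = 0
while response != 'y' and response != 'n':
    response = input("Please enter (y)es or (n)o. \n")
```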
In a recent article on clang-tidy I referenced the fact that we're doing a huge refactoring regarding char pointers, lifetime, ownership and std::strings. Today's post is another one related to that change, where even though everything compiled correctly, it didn't work. For a compiled language, that is not something you expect. Next to unit tests, a compiler error is your number one sign that you've made a mistake somewhere. In this case, however, the code all compiled fine. The issue was an older part of the code not using override, combined with automated refactoring in CLion missing some parts of the code during a change. So the issue in this case is entirely our own fault; it was spotted in manual testing, but I'd rather it had not happened at all. In this post I'll describe the problem, including some example code that illustrates what happened. My key point is that even though the code compiles, you should always test it, preferably automated with unit and integration tests, otherwise manually with a runbook. Here's a screenshot of CLion's Refactoring -> Change Signature dialog:

Refactoring char pointers to const std::string references

In our refactoring efforts we're rewriting a large part of the code that handles text, strings if you will. Most texts come from a configuration file (binary XML), for example the name of a consumption (Coffee Black). In the past this config was stored on a smartcard or burned into an EEPROM, which is why the texts and translations are embedded in the config. Nowadays we'd do that differently, but refactoring everything at once is a bad idea (Uncle Bob calls this the Big Redesign In The Sky), so we do it one small part
(read more)
Magit for VSCode, inspired by the awesome original Magit.

Usage • Tutorial • Settings • Vim Bindings • Roadmap

Keyboard driven Git interface. Sub hunk staging. (Theme: Dracula)

Usage

VSCode Command and default shortcut:
- Magit Status: alt+x g
- Magit File Popup: alt+x alt+g
- Magit Dispatch: alt+x ctrl+g
- Magit Help (when in status view): ?

> Magit in the VSCode Command Palette will show you all available Magit actions from where you are.

Keybindings inside edamagit

Popup and dwim commands:
A Cherry-pick, b Branch, c Commit, d Diff, f Fetch, F Pull, I Ignore, l Log, m Merge, M Remote, P Push, r Rebase, t Tag, V Revert, X Reset, y Show Refs, z Stash, shift+1 Run, shift+5 Worktree, o Submodules, shift+4 Process Log

Applying changes:
a Apply, s Stage, u Unstage, v Reverse, S Stage all, U Unstage all, k Discard

Essential commands:
g refresh current buffer, TAB toggle section at point, RET visit thing at point, shift+4 show git process view, q exit / close magit view, ctrl+j move cursor to next entity, ctrl+k move cursor to previous entity

[ See also the edamagit tutorial ]

Settings

Forge-enabled: Enable Forge functionality (show pull requests, issues, etc. from e.g. GitHub)
Display-buffer-function: Choose which side for magit
(read more)
GNU Guix 1.3.0 released
Ludovic Courtès, Maxim Cournoyer — May 11, 2021

We are pleased to announce the release of GNU Guix version 1.3.0! The release comes with ISO-9660 installation images, a virtual machine image, and with tarballs to install the package manager on top of your GNU/Linux distro, either from source or from binaries. Guix users can update by running guix pull. It's been almost 6 months since the last release, during which 212 people contributed code and packages, and a number of people contributed to other important tasks—code review, system administration, translation, web
(read more)
Measuring the performance of a program means keeping track of the resources it consumes. In addition to raw technical metrics, such as looking closely at RAM and CPU usage, it is useful to monitor the execution time of a given task. Tasks such as sorting a set of values in increasing order can take a long time depending on the algorithm used. Before delving into optimizing an algorithm, it is useful to understand how to measure the execution time of a program. In this article, I will first introduce the concept of time and then we will explore some beginner techniques for mak
(read more)
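As a preview of that kind of measurement, here is a minimal sketch using Python's time.perf_counter to time a sort; the article's own examples may differ.

```python
# Minimal example of measuring how long a task takes, here sorting a list of
# values in increasing order, using a monotonic high-resolution clock.
import random
import time

values = [random.random() for _ in range(1_000_000)]

start = time.perf_counter()
sorted_values = sorted(values)
elapsed = time.perf_counter() - start

print(f"sorted {len(values):,} values in {elapsed:.3f} seconds")
```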
This all started with a Mele PCG09. Before testing Linux on this I took a quick look under Windows, and the Device Manager there showed an exclamation mark next to a Realtek 8723BS Bluetooth device, so BT did not work. Under Linux I quickly found out why: the device actually uses a Broadcom Wi-Fi/BT chipset, attached over SDIO and a UART for the Wi-Fi and BT parts respectively. The UART-connected BT part was described in the ACPI tables with a HID (Hardware-ID) of "OBDA8723", which is not good. Now I could have easily fixed this with an extra initrd with a DSDT override, but that did not feel right. There was a
(read more)
Historical memory prices (columns: date, $/Mbyte, date of reference, reference, page, company, size in KByte, cost in US$, speed in nsec, memory type; the additional "JDR Chip Prices: Size Kbit, US$, nsec" columns carry no data in this excerpt):

Date    | $/Mbyte     | Date | Ref     | Page | Company | Size (KByte) | Cost (US$) | Speed (nsec) | Memory Type
1957.00 | 411,041,792 | 1957 | Phister | 366  | C.C.C.  | 0.00098      | 392.00     | 10000        | transistor Flip-Flop
1959.00 | 67,947,725  | 1959 | Phister | 366  | E.E.Co. | 0.00098      | 64.80      | 10000        | vacuum tube Flip-Flop
1960.00 | 5,242,880   | 1960 | Phister | 367  | IBM     | 0.00098      | 5.00       | 11500        | IBM 1401 core memory
1965.00 | 2,642,412   | 1965 | Phister | 367  | IBM     | 0.00098      | 2.52       | 2000         | I
(read more)
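To make the first column's units explicit: $/Mbyte appears to be the cost divided by the size expressed in megabytes, with the listed 0.00098 KByte treated as exactly 1 byte; a quick check of that assumption against the rows above:

```python
# Sanity-check of the $/Mbyte column: cost divided by size in megabytes,
# where the listed 0.00098 KByte is taken as exactly 1 byte (1/1024 KByte).
rows = [
    ("1957 C.C.C. flip-flop", 392.00, 1),   # (label, cost in US$, size in bytes)
    ("1959 E.E.Co. flip-flop", 64.80, 1),
    ("1960 IBM 1401 core", 5.00, 1),
]
for label, cost_usd, size_bytes in rows:
    per_mbyte = cost_usd / (size_bytes / (1024 * 1024))
    print(f"{label}: ${per_mbyte:,.0f} per MByte")
# -> 411,041,792 / 67,947,725 / 5,242,880, matching the table.
```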
I’ve written a lot of C++ in my career, but I still prefer to design in C for most embedded projects (“why” is the subject of a much longer, rant-filled post). I’m not a big proponent of OOP in general, but I do think having an “instance” of something which contains stateful data is a generally useful thing for embedded software. For example, you may want to have several instances of a ring buffer (aka circular FIFO queue) on your system. Each instance contains stateful data, like the current position of read and write pointers. What’s the best way to model this in C? Objects are not a native concept in C, but you can achieve something resembling objects by using a design pattern known as the “opaque pointer”. This post will show you what the pattern is, expl
(read more)
New Major Versions Released! Flask 2.0, Werkzeug 2.0, Jinja 3.0, Click 8.0, ItsDangerous 2.0, and MarkupSafe 2.0
written by David Lord on 2021-05-11 in Releases

The Pallets team is pleased to announce that the next major versions of our six core projects have been released! This represents two years of work by the Pallets team and community, and there are a significant number of changes and exciting new features. Check out the logs for every project to see what's new. Flask depends on the five other libraries, so be sure to read them all if you're upgrading Flask.

Flask 2.0
Werkzeug 2.0
Jinja 3.0
Click 8.0
ItsDangerous 2.0
MarkupSafe 2.0

Installing and Upgrading

Install from PyPI with pip. For example, for
(read more)
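The upgrade itself is the usual pip invocation (pip install -U Flask). As a minimal sketch, not taken from the announcement, here is a tiny app touching two of Flask 2.0's additions: route-method shortcuts and async views (the latter needs the flask[async] extra).

```python
# Minimal Flask 2.0 app sketch (not from the announcement itself).
# Upgrade first with: pip install -U Flask
from flask import Flask

app = Flask(__name__)

@app.get("/")               # @app.get / @app.post shortcuts are new in 2.0
def index():
    return "Hello from Flask 2.0"

@app.get("/async")
async def async_view():     # async views are supported in 2.0 (flask[async] extra)
    return {"ok": True}

if __name__ == "__main__":
    app.run(debug=True)
```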
"Types help you reason about effects", we declare. And they do! Except when they don't. "Just follow the types!" we insist. But sometimes the types take you down a garden path. When the type checker is happy but the behaviour is all wrong, it can be hard to find where you took the wrong turn. In this post I'll share real-world examples of this phenomenon, and offer some tips on how to avoid it.

Random generation of applicatives

The Applicative type class provides a function for lifting a "pure" value into the applicative data type:

  class Applicative (k :: * -> *) where
    pure  :: a -> k a
    (<*>) :: k (a -> b) -> k a -> k b

Assume we have a random generator of values of type a, and wish to generate random applicatives. The shape of this problem is:

  genAp :: (Applica
(read more)
May 11, 2021 There’s a clear answer to Babel’s funding woes: sell the new version of Babel, Babel 8. At least until Babel 9 comes out. Then sell Babel 9. The only problem with monetary donation-based funding for independent open source projects is that it doesn’t work. It feels good. It avoids harshing “community” vibes. It’s easy to set up these days. And money sometimes spurts in at the start, which is very reassuring up from nil, especially if you get a nice “starter” grant. But the initial enthusiasm wears off. The dollars flatten out. Donations, as a rule, do not yield enough money over enough time to fund even very important, notable projects that require ongoing effort, outside the corporate fold. Especially proj
(read more)
October 2, 2010 — For the third time in my life I've gotten a Macbook. The previous two times I returned them. I'm hoping the third time is a charm. After a week I'm finally starting to get used to it. I'm starting to really customize it and that seems to make all the difference. I've removed nearly everything from the dock, added a bunch of things to my bash profile, and a friend at work gave me a crash course in Textmate. I've downloaded a program called Sizeup, which not only emulates the awesome "Snap" feature of Windows 7, but in fact is even better. It gives you the left/right 50% snap feature just like in Win7, but also lets you snap things into top/bottom and even snap things into quadrants. Pretty nice software. Firefox is my favorite of the 3 browsers on the Mac. I l
(read more)
Ada is an ISO-standardized programming language focusing on readability, correctness and reliability. Towards these goals it focuses on explicitness, strong typing, compile-time and run-time constraints on types, and minimum symbology. Ada has proven itself in reliability with a track record of nearly four decades of usage in embedded, safety, and critical systems. Over this timeframe, Ada was updated three times, each time with a new Reference Manual, a more in-depth Annotated Reference Manual, and a Rationale document describing the reasoning for each feature. Backing each of these changes is the Ada Conformity Assessment Test Suite (ACATS), a battery of freely available tests to help Ada compilers and interpreters properly interpret the standard. Ada 2012 tak
(read more)
This post is about performance techniques, so I hope you won't mind that the site in question is not quite finished. Edit: it's finished! But you need something to click on so you can decide whether you value my opinion or not, so here you go: know-it-all.io. Hopefully that opened quickly and I have established credibility. If that sounds immodest it's because I'm awesome. Let me practice my pitch for the site: "have you ever wondered what you don't know about the web? What hidden property or method or attribute has managed to evade your attention? Wouldn't you like to go through a long list and tick off the stuff you know, to be left with a glorious summary of things to go and learn?" If you're wondering why I'm writing about this site when it's not finished… because fee
(read more)
camlboot is an experiment on the bootstrapping of the OCaml compiler. It is composed of:

- An interpreter of OCaml, in the directory interpreter/, which is able to interpret the OCaml compiler. This interpreter is written in a subset of OCaml called miniml, for which a compiler is available as part of the experiment.
- A compiler for miniml, in the directory miniml/compiler/. This compiler compiles miniml to OCaml bytecode, which is then executed by the OCaml runtime. It is written in Scheme (more specifically, Guile), since the goal is to bootstrap OCaml. Note that Guile is itself bootstrapped directly from gcc, and building OCaml needs a C compiler as well, so we effectively bootstrap OCaml from gcc.
- A handwritten lexer for the bootstrapping of ocamllex, in the directory lex/. Thi
(read more)
2021-05-11 21:21 · Toolchains

This post is a spiritual successor to my "Building LLVM on OpenBSD/loongson" article, in which I retraced my attempts to build LLVM 3.7.1 on MIPS64 in a RAM constrained environment. After reading the excellent "Make LLVM fast again", I wanted to revisit the topic, and see how fast I could build a recent version of LLVM and Clang on modern x86 server hardware. The system I'm using for this experiment is a CCX62 instance from Hetzner, which has 48 dedicated vCPUs and 192 GB of RAM. This is the fastest machine available in their cloud offering at the moment. The system is running Fedora 34 with up-to-date packages and kernel. The full result of cat /proc/cpuinfo is available here.

  uname -a
  Linux benchmarks 5.11.18-300.fc34.x86_64 #1 SMP Mon May 3 15:10:3
(read more)
Today I'm launching a huge update to my suite of Swift static site generation tools — specifically a brand new version of Plot, the library that's used to generate all of this website's HTML, which adds a new API for building HTML components in a very SwiftUI-like way. This new version has been in the works for over a year, and has been properly battle-tested in production. In fact, it was used to render the HTML for the article that you're reading right now! So I couldn't be more excited to now finally make it publicly available for the entire Swift community.

Essential Developer: If you're a mid/senior iOS developer who's looking to improve both your skills and your salary level, then join this 100% free online crash course, starting on May 17th. Through a series of lecture
(read more)
Doomed Assault

As in many gamedev stories, Poom should not have existed. It came to be possible on Pico8 thanks to a (still unpublished) project, an Assault demake - a game I used to play at arcades as a kid (dual stick! excellent music! ultra low bass explosions!). Early 2020, quite proud of my silky smooth rotozoom engine running entirely in memory, well below 50% cpu with enemy units: load #assault (requires a 0.1.12c version to run at full speed). The engine was looking good, so it was time to export to HTML and demo it. I ran a couple of tests on my home computers and mobiles... It did not go well; performance was all over the place and required a powerful PC to run at full speed. Reporting the bug (?) to Zep (Joseph White, Pico8 author), it became apparent the game relied too much on binary operations and trashed the web player. The delicate balance of simulated API costs did not account for so many "low level" operations per frame. Too many binary ops? Nah...

  poke4( mem, bor( bor( bor(rotr(band(shl(m[bor(band(mx,0xffff), band(lshr(srcy,16),0x0.ffff))],shl(band(srcx,7),2)),0xf000),28), rotr(band(shl(m[bor(band(mx+mdx1,0xffff), band(lshr(srcy-ddy1,16),0x0.ffff))],shl(band(srcx+ddx1,7),2)),0xf000),24)), bor(rotr(band(shl(m[bor(band(mx+mdx2,0xffff), band(lshr(srcy-ddy2,16),0x0.ffff))],shl(band(srcx+ddx2,7),2)),0xf000),20), rotr(band(shl(m[bor(band(mx+mdx3,0xffff), band(lshr(srcy-ddy3,16),0x0.ffff))],shl(band(srcx+ddx3,7),2)),0xf000),16)) ), bor( bor(rotr(band(shl(m[bor
(read more)
The WhatsApp messaging app is displayed on an Apple iPhone on May 14, 2019 in San Anselmo, California. Facebook-owned messaging app WhatsApp announced a cybersecurity breach that makes users vulnerable to malicious spyware installation on iPhone and Android smartphones. WhatsApp is encouraging its 1.5 billion users to update the app as soon as possible. (Justin Sullivan | Getty Images News | Getty Images)

LONDON — A German regulator has ordered Facebook to stop processing data on its citizens from the messaging service WhatsApp. The Hamburg Commissioner for Data Protection and Freedom of Information, or HmbBfDI, said Tuesday that it has issued an injunction that prevents Facebook from processing personal data from WhatsApp. Facebook said it is considering how to appeal the order.

Mark Zuckerberg's social media giant has been looking for new ways to monetize WhatsApp, which is used by around 60 million people in Germany, ever since it acquired it for $19 billion in 2014. In the latest move, WhatsApp users worldwide have been invited to agree to new terms of use and privacy that give the company wide-ranging powers to share data with Facebook. WhatsApp users are being told to agree to the new terms by May 15 if they want to continue using the app, which now competes with rivals like Signal and Telegram. The majority of users who have received the new terms of service and privacy policy have accepted the update, Facebook said.

But the update isn't legal, according to Johannes Caspar, who leads the HmbBfDI. He has issued a three-month emergency order that prevents Facebook from continuing with WhatsApp data processing in Germany. "The order is intended to safeguard the rights and freedoms of the many millions of users throughout Germany who give their consent to the terms of use," he said in a statement. "It is important to prevent disadvantages and damages associated with such a black box procedure."

Caspar said the Cambridge Analytica scandal and the data leak that affected more than 500 million Facebook users "show the scale and dangers posed by mass profiling," adding that profiles can be used to manipulate democratic decisions. "The order now issued refers to the further processing
(read more)
WIP: Works, but there are still a lot of rough edges. A local development HTTPS proxy server meant to simplify working with multi-domain applications by serving each application on a separate domai
(read more)
gil_load is a utility for measuring the fraction of time the CPython GIL (Global Interpreter Lock) is held or waited for. It is for Linux only, and has been tested on Python 2.7, 3.5, 3.6 and 3.7.

Contents: Installation, Introduction, Usage, Functions

Installation

To install gil_load, run:

  $ sudo pip3 install gil_load

or to install from source:

  $ sudo python3 setup.py install

gil_load can also be installed with Python 2.

Introduction

A lot of people complain about the Python GIL, saying that it prevents them from utilising all cores on their expensive CPUs. In my experience this claim is more often than not without merit. This module was motivated by the desire to demonstrate that typical parallel code in Python, such as numerical calculations using numpy, does not suffer from high GIL contention and is truly parallel, utilising all cores. However, in other circumstances where the GIL is contested, this module can tell you how contested it is, which threads are hogging the GIL and which are starved.

Usage

In your code, call gil_load.init() before starting any threads. When you wish to begin monitoring, call gil_load.start(). When you want to stop monitoring, call gil_load.stop(). You can thus monitor a small segment of code, which is useful if your program is idle most of the time and you only need to profile when something is actually happening. Multiple calls to gil_load.start() and gil_load.stop() can accumulate statistics over time. See the arguments of gil_load.start() for more details. You may either pass arguments to gil_load.start() configuring it to output monitoring results periodically to a file (such as sys.stdout), or you may manually collect statistics by calling gil_load.get(). For example, here is some code that runs four threads doing fast Fourier transforms with numpy:

  import numpy as np
  import threading
  import gil_load

  N_THREADS = 4
  NPTS = 4096

  gil_load.init()

  def do_some_work():
      for i in range(2):
          x = np.random.randn(NPTS, NPTS)
          x[:] = np.fft.fft2(x).real

  gil_load.start()

  threads = []
  for i in range(N_THREADS):
      thread = threading.Thread(target=do_some_work, daemon=True)
      threads.append(thread)
      thread.start()
(read more)