Morning Edition


Designing a physics engine

Words: calvin - lobste.rs - 02:00 01-08-2020

By coincidence, right when The Cherno announced his game engine series I was just starting to get going on my own engine. I couldn’t wait to finally have a professional opinion on how to make one. With self-taught programming it’s hard to not doubt yourself constantly, wondering if you are doing things right or just think you are.

Recently, he has been posting videos about huge aspects of his engine like physics and entity systems, which were what I really wanted to learn about by making myself, but he ended up using libraries instead of going through the internals! I am not against using libraries, but to use them for the fun stuff? I felt like it defeated the point of making a custom engine series.

There is an argument to be made about saving time, but this was the first C++ project that I was making and the goal from the start was to go through all the major pillars of an engine: input, graphics, physics, entities, and audio. I wanted to learn how those things worked along with C++ and code design in general.

I bet that some other people are interested in the details of how these systems work, and I want to learn how to explain code better, so I am going to try and make some videos going over the internals of these systems. They end up being much simpler than at first glance.

Let’s start with the physics engine…

Physics engines are responsible for figuring out where each object in a scene is over time. Objects can collide with one another, then choose to respond in several ways. It’s a generic problem that the user can configure at several different levels. Do they want a collider? Do they want to respond to collisions? Do they want to simulate dynamics? They could want dynamics, but not gravity. It’s a problem that calls for good planning and robust design.

I looked at how bullet and box2d went about sorting their engines and concluded that the way bullet went about it was solid. I boiled it down to just what was needed, and based my design around that. There are already some great articles going over the hard math involved, so I am going to focus on the design aspect instead because I haven’t seen anyone do that, and it’s also a real headache.

At the current moment, this physics engine is not fully featured, but in future articles I plan to build it out further. This article will not cover rotation, multiple contact point collisions, or constrained simulation. I think it will work out for the best as it’s easy to get overwhelmed, and I want to ease into those topics. With that out of the way, let’s dive into the different parts of a physics engine.

The problem can be split into three pieces: dynamics, collision detection, and collision response. I’ll start with dynamics because it is by far the simplest.

Dynamics is all about calculating where the new positions of objects are based on their velocity and acceleration. In high school you learn about the four kinematic equations along with Newton’s three laws, which describe the motion of objects. We’ll only be using the first and third kinematic equations; the others are more useful for analyzing situations than for simulation. That leaves us with:

v = v_0 + at

\Delta x = v_0t + \frac{1}{2}at^2

We can give ourselves more control by using Newton’s 2nd law, subbing out acceleration to give us:

v = v_0 + \frac{F}{m}t

Each object needs to store three properties: velocity, mass, and net force. Here we hit the first design decision: net force could be either a list or a single vector. In school you draw force diagrams and sum up the forces, which implies storing a list. That would let the user set a force, but they would need to remove it later, which could get annoying. If we think about it further, net force is really the total force applied in a single frame, so we can use a single vector and clear it at the end of each update. The user applies a force by adding it, and removal is automatic. This shortens our code and gives a small performance bump because there is no summation of forces, just a running total.

We’ll use this struct to store the object info for now.

struct Object {
    vector3 Position; // struct with 3 floats for x, y, z or i + j + k
    vector3 Velocity;
    vector3 Force;
    float Mass;
};

We need a way to keep track of the objects we want to update. A classic approach is to have a physics world that has a list of objects and a step function that loops over each one. Let’s see how that might look; I’ll omit header/cpp files for brevity.

class PhysicsWorld {
private:
    std::vector<Object*> m_objects;
    vector3 m_gravity = vector3(0, -9.81f, 0);

public:
    void AddObject   (Object* object) { /* ... */ }
    void RemoveObject(Object* object) { /* ... */ }

    void Step(float dt) {
        for (Object* obj : m_objects) {
            obj->Force += obj->Mass * m_gravity; // apply a force

            obj->Velocity += obj->Force / obj->Mass * dt;
            obj->Position += obj->Velocity * dt;

            obj->Force = vector3(0, 0, 0); // reset net force at the end
        }
    }
};

Note the use of pointers: this forces other systems to take care of actually storing the objects, leaving the physics engine to worry about physics, not memory allocation.

With this you can simulate all sorts of stuff from objects flying through the sky to solar systems.

You can do a lot with this, but it’s the easy part to be honest, and that’s not what you came for…

Collision detection is more involved, but we can lighten the load by using some clever tricks. Let’s think about what needs to be found first. If we look at some examples of objects colliding, we notice that in most cases there is a point on each shape that is furthest inside the other.

This turns out to be all we need to respond to a collision. From those two points we can find the normal, and how deep the objects are inside one another. This is huge because it means that we can abstract the idea of different shapes away, and only worry about the points in the response.

Let’s jump into the code; we’ll need some helper structs that I’ll note first.

struct CollisionPoints {
    vector3 A;      // Furthest point of A into B
    vector3 B;      // Furthest point of B into A
    vector3 Normal; // B – A normalized
    float Depth;    // Length of B – A
    bool HasCollision;
};

struct Transform { // Describes an object's location
    vector3 Position;
    vector3 Scale;
    quaternion Rotation;
};

Each shape will have a different type of collider to hold its properties and a base to allow them to be stored. Any type of collider should be able to test for a collision with any other type, so we’ll add functions in the base for each one. These functions will take Transforms, so the colliders can use relative coordinates. I’ll only demonstrate spheres and planes, but the code is repeatable for any number of colliders.

struct Collider {
    virtual CollisionPoints TestCollision(
        const Transform* transform,
        const Collider* collider,
        const Transform* colliderTransform) const = 0;

    virtual CollisionPoints TestCollision(
        const Transform* transform,
        const SphereCollider* sphere,
        const Transform* sphereTransform) const = 0;

    virtual CollisionPoints TestCollision(
        const Transform* transform,
        const PlaneCollider* plane,
        const Transform* planeTransform) const = 0;
};

Let’s make both types of colliders at the same time to see how they interact. A sphere is defined as a point and a radius, and a plane is defined as a normal vector and a distance. We’ll override the functions from Collider, but won’t worry about the work for now.

We can choose per collider which other colliders it will detect by filling, or not filling, in these functions. In this case, we don’t want Plane v Plane collisions, so we return an empty CollisionPoints.

struct SphereCollider : Collider {
    vector3 Center;
    float Radius;

    CollisionPoints TestCollision(
        const Transform* transform,
        const Collider* collider,
        const Transform* colliderTransform) const override
    {
        return collider->TestCollision(colliderTransform, this, transform);
    }

    CollisionPoints TestCollision(
        const Transform* transform,
        const SphereCollider* sphere,
        const Transform* sphereTransform) const override
    {
        return algo::FindSphereSphereCollisionPoints(
            this, transform, sphere, sphereTransform);
    }

    CollisionPoints TestCollision(
        const Transform* transform,
        const PlaneCollider* plane,
        const Transform* planeTransform) const override
    {
        return algo::FindSpherePlaneCollisionPoints(
            this, transform, plane, planeTransform);
    }
};

We can add a function for testing the base and use a technique called double dispatch. This takes advantage of the type system to determine both types of colliders for us by swapping the arguments, determining the first, then the second type through two calls of TestCollision. This saves us needing to know what type of colliders we are checking, which means we’ve fully abstracted away the notion of different shapes outside the collision detection.

struct PlaneCollider : Collider {
    vector3 Plane;
    float Distance;

    CollisionPoints TestCollision(
        const Transform* transform,
        const Collider* collider,
        const Transform* colliderTransform) const override
    {
        return collider->TestCollision(colliderTransform, this, transform);
    }

    CollisionPoints TestCollision(
        const Transform* transform,
        const SphereCollider* sphere,
        const Transform* sphereTransform) const override
    {
        // reuse sphere code
        return sphere->TestCollision(sphereTransform, this, transform);
    }

    CollisionPoints TestCollision(
        const Transform* transform,
        const PlaneCollider* plane,
        const Transform* planeTransform) const override
    {
        return {}; // No plane v plane
    }
};

In cases like this, where there are many classes with a web of similar functions, it can be confusing where the actual code is located. Sphere v Sphere would obviously be in the Sphere.cpp file, but Sphere v Plane could be in either Sphere.cpp or Plane.cpp; there is no way to know without hunting, which gets annoying when there are many files.

To get around this, let’s make an algo namespace and put the actual work in there. We’ll need a function for each pair of colliders we want to be able to check. I’ve made Sphere v Sphere and Sphere v Plane, but not Plane v Plane because it’s not so useful. I’m not going to cover these functions here because they are not part of the design per se, but you can check out the source if you are interested.

namespace algo {
    CollisionPoints FindSphereSphereCollisionPoints(
        const SphereCollider* a, const Transform* ta,
        const SphereCollider* b, const Transform* tb);

    CollisionPoints FindSpherePlaneCollisionPoints(
        const SphereCollider* a, const Transform* ta,
        const PlaneCollider* b, const Transform* tb);
}

You can use these colliders on their own, but you’ll most likely want to attach one to an object. We’ll replace Position with a Transform in the Object. We are still only using position in the dynamics, but we can use scale and rotation in the collision detection. There is a tricky decision to make here: I’m going to use a Transform pointer for now, but we’ll come back to this at the end and see why that might not be the best choice.

struct Object {
    float Mass;
    vector3 Velocity;
    vector3 Force;

    Collider* Collider;
    Transform* Transform;
};

A good design practice is to separate all the different aspects of complex functions like Step into their own functions. This makes the code much more readable, so let’s add another function named ResolveCollisions in the physics world.

struct Collision {
    Object* ObjA;
    Object* ObjB;
    CollisionPoints Points;
};

Again we have the physics world; I’ll compact the parts we have already looked at, but it’s nice to have context.

class PhysicsWorld {
private:
    std::vector<Object*> m_objects;
    vector3 m_gravity = vector3(0, -9.81f, 0);

public:
    void AddObject   (Object* object) { /* ... */ }
    void RemoveObject(Object* object) { /* ... */ }

    void Step(float dt) {
        ResolveCollisions(dt);
        for (Object* obj : m_objects) { /* ... */ }
    }

    void ResolveCollisions(float dt) {
        std::vector<Collision> collisions;
        for (Object* a : m_objects) {
            for (Object* b : m_objects) {
                if (a == b) break;

                if (   !a->Collider
                    || !b->Collider)
                {
                    continue;
                }

                CollisionPoints points = a->Collider->TestCollision(
                    a->Transform,
                    b->Collider,
                    b->Transform);

                if (points.HasCollision) {
                    collisions.emplace_back(a, b, points);
                }
            }
        }

        // Solve collisions
    }
};

This is looking good: thanks to the double dispatch, a single call to TestCollision is all we need. Using a break in the inner loop gives us the unique pairs, so we never check the same two objects twice.

There is only one annoying caveat: because the order of the objects is unknown, sometimes you will get a Sphere v Plane check, but other times a Plane v Sphere check. If we just called the algo function for Sphere v Plane, we would get the reversed answer, so we need to add some code in the plane collider to swap the order of the CollisionPoints.

CollisionPoints PlaneCollider::TestCollision(
    const Transform* transform,
    const SphereCollider* sphere,
    const Transform* sphereTransform) const
{
    // reuse sphere code
    CollisionPoints points = sphere->TestCollision(sphereTransform, this, transform);

    vector3 T = points.A; // You could have an algo Plane v Sphere to do the swap
    points.A = points.B;
    points.B = T;

    points.Normal = -points.Normal;

    return points;
}

Now that we have detected a collision, we need some way to react to it.

Because we have abstracted away the idea of different shapes into points, the collision response is almost pure math. The design is relatively simple compared to what we just went through; we’ll start with the idea of a solver. A solver is used to solve things about the physics world. That could be the impulse from a collision or raw position correction, really anything you choose to implement.

Let’s start with an interface.

class Solver {
public:
    virtual void Solve(
        std::vector<Collision>& collisions,
        float dt) = 0;
};

We’ll need another list in the physics world to store these, and functions to add and remove them. After we generate our list of collisions, we can feed it to each solver.

class PhysicsWorld {
private:
    std::vector<Object*> m_objects;
    std::vector<Solver*> m_solvers;
    vector3 m_gravity = vector3(0, -9.81f, 0);

public:
    void AddObject   (Object* object) { /* ... */ }
    void RemoveObject(Object* object) { /* ... */ }

    void AddSolver   (Solver* solver) { /* ... */ }
    void RemoveSolver(Solver* solver) { /* ... */ }

    void Step(float dt) { /* ... */ }

    void ResolveCollisions(float dt) {
        std::vector<Collision> collisions;
        for (Object* a : m_objects) { /* ... */ }

        for (Solver* solver : m_solvers) {
            solver->Solve(collisions, dt);
        }
    }
};

In the last section the meat was in the design; this one leans much more towards what kinds of solvers you implement. I’ve made an impulse solver and a position solver myself that seem to work for most situations. To keep this short, I won’t cover the math here, but you can check out the source for the impulse solver here, and the position solver here, if you are interested.

The real power of a physics engine comes from the options that you give to the user. In this example there aren’t too many that can be changed, but we can start to think about the different options we want to add. In most games you want a mix of objects: some that simulate dynamics, and others that are static obstacles. There is also a need for triggers: objects that don’t go through the collision response, but fire off events for exterior systems to react to, like an end-of-level flag. Let’s go through some minor edits we can make to allow these settings to be easily configured.

The biggest change we can make is to distinguish between objects that simulate dynamics and ones that don’t. Because of how many more settings a dynamic object needs, let’s separate those out from what is necessary for collision detection. We can split Object into CollisionObject and Rigidbody structs. We’ll make Rigidbody inherit from CollisionObject to reuse the collider properties and allow us to store both types easily.

We are left with these two structs. A dynamic_cast could be used to figure out if a CollisionObject is really a Rigidbody, but that makes the code slightly longer, so I like to add a boolean flag even though it’s not considered best practice. We can also add a flag for the object to act as a trigger, and a function for a callback. While we’re at it, let’s beef up the security by protecting the raw values.

struct CollisionObject {
protected:
    Transform* m_transform;
    Collider* m_collider;
    bool m_isTrigger;
    bool m_isDynamic;

    std::function<void(Collision&, float)> m_onCollision;

public:
    // getters & setters, no setter for isDynamic
};

We can add many more settings to the Rigidbody. It’s useful if each object has its own gravity, friction, and bounciness. This opens the door to all sorts of physics-based effects. In a game you could have an ability that changes the gravity in an area for a time. You could have some objects be bouncy and others heavy like weighted balls. A floor could be made of ice and be slippery for a harder challenge.

struct Rigidbody : CollisionObject {
private:
    vector3 m_gravity;  // Gravitational acceleration
    vector3 m_force;    // Net force
    vector3 m_velocity;

    float m_mass;
    bool m_takesGravity; // If the rigidbody will take gravity from the world.

    float m_staticFriction;  // Static friction coefficient
    float m_dynamicFriction; // Dynamic friction coefficient
    float m_restitution;     // Elasticity of collisions (bounciness)

public:
    // getters & setters
};

Let’s split the PhysicsWorld into a CollisionWorld and a DynamicsWorld as well. We can move the Step function into the DynamicsWorld, and ResolveCollisions into the CollisionWorld. This saves someone who doesn’t want dynamics from sifting through functions that are useless to them.

We can make some edits to the ResolveCollisions function to give triggers their correct functionality. Let’s split the function into its parts to keep it readable. Adding a callback to the world can be useful too if you want program-wide events.

class CollisionWorld {
protected:
    std::vector<CollisionObject*> m_objects;
    std::vector<Solver*> m_solvers;

    std::function<void(Collision&, float)> m_onCollision;

public:
    void AddCollisionObject   (CollisionObject* object) { /* ... */ }
    void RemoveCollisionObject(CollisionObject* object) { /* ... */ }

    void AddSolver   (Solver* solver) { /* ... */ }
    void RemoveSolver(Solver* solver) { /* ... */ }

    void SetCollisionCallback(std::function<void(Collision&, float)>& callback) { /* ... */ }

    void SolveCollisions(
        std::vector<Collision>& collisions,
        float dt)
    {
        for (Solver* solver : m_solvers) {
            solver->Solve(collisions, dt);
        }
    }

    void SendCollisionCallbacks(
        std::vector<Collision>& collisions,
        float dt)
    {
        for (Collision& collision : collisions) {
            m_onCollision(collision, dt);

            auto& a = collision.ObjA->OnCollision();
            auto& b = collision.ObjB->OnCollision();

            if (a) a(collision, dt);
            if (b) b(collision, dt);
        }
    }

    void ResolveCollisions(float dt) {
        std::vector<Collision> collisions;
        std::vector<Collision> triggers;

        for (CollisionObject* a : m_objects) {
            for (CollisionObject* b : m_objects) {
                if (a == b) break;

                if (   !a->Col()
                    || !b->Col())
                {
                    continue;
                }

                CollisionPoints points = a->Col()->TestCollision(
                    a->Trans(),
                    b->Col(),
                    b->Trans());

                if (points.HasCollision) {
                    if (   a->IsTrigger()
                        || b->IsTrigger())
                    {
                        triggers.emplace_back(a, b, points);
                    }
                    else {
                        collisions.emplace_back(a, b, points);
                    }
                }
            }
        }

        SolveCollisions(collisions, dt); // Don't solve triggers

        SendCollisionCallbacks(collisions, dt);
        SendCollisionCallbacks(triggers, dt);
    }
};

To keep the Step function readable, let’s split it up into pieces as well.

class DynamicsWorld : public CollisionWorld {
private:
    vector3 m_gravity = vector3(0, -9.81f, 0);

public:
    void AddRigidbody(Rigidbody* rigidbody) {
        if (rigidbody->TakesGravity()) {
            rigidbody->SetGravity(m_gravity);
        }

        AddCollisionObject(rigidbody);
    }

    void ApplyGravity() {
        for (CollisionObject* object : m_objects) {
            if (!object->IsDynamic()) continue;

            Rigidbody* rigidbody = (Rigidbody*)object;
            rigidbody->ApplyForce(rigidbody->Gravity() * rigidbody->Mass());
        }
    }

    void MoveObjects(float dt) {
        for (CollisionObject* object : m_objects) {
            if (!object->IsDynamic()) continue;

            Rigidbody* rigidbody = (Rigidbody*)object;

            vector3 vel = rigidbody->Velocity()
                        + rigidbody->Force() / rigidbody->Mass() * dt;

            vector3 pos = rigidbody->Position()
                        + rigidbody->Velocity() * dt;

            rigidbody->SetVelocity(vel);
            rigidbody->SetPosition(pos);

            rigidbody->SetForce(vector3(0, 0, 0));
        }
    }

    void Step(float dt) {
        ApplyGravity();
        ResolveCollisions(dt);
        MoveObjects(dt);
    }
};

Now we have a whole stack of options that the user can configure for many different scenarios with a simple yet powerful API.

There is one more option that I want to cover. The physics world has no need to update every frame. Say a game like CS:GO renders at 300 fps; it’s not stepping the physics every frame, it might run at 50 Hz instead. If the game only used the positions from the physics engine, objects would move once every 0.02 seconds, causing a jittery look. And that’s an ideal rate; some games only update at 20 Hz, leaving 0.05 seconds between updates!

To get around this, it is common to split the physics world from the rendered world. This is simply done by using a raw Transform instead of a pointer and having a system outside the physics interpolate the position every frame. Let’s see how we might implement this.

First, we’ll get rid of that pointer. We’ll need to add a last transform as well, which gets set just before the update in MoveObjects.

struct CollisionObject {
protected:
    Transform m_transform;
    Transform m_lastTransform;

    Collider* m_collider;
    bool m_isTrigger;
    bool m_isStatic;
    bool m_isDynamic;

    std::function<void(Collision&, float)> m_onCollision;

public:
    // Getters & setters for everything, no setter for isDynamic
};

Because we used getters and setters, this won’t break any code outside the CollisionObject. We can make an exterior system that keeps track of how far it is into the physics update and use a linear interpolation between the last and current position. I’m not going to go into where to put this system, but it should update every frame rather than every physics update.

class PhysicsSmoothStepSystem {
private:
    float accumulator = 0.0f;

public:
    void Update() {
        for (Entity entity : GetAllPhysicsEntities()) {
            Transform*       transform = entity.Get<Transform>();
            CollisionObject* object    = entity.Get<CollisionObject>();

            Transform& last    = object->LastTransform();
            Transform& current = object->Transform();

            transform->Position = lerp(
                last.Position,
                current.Position,
                accumulator / PhysicsUpdateRate()
            );
        }

        accumulator += FrameDeltaTime();
    }

    void PhysicsUpdate() {
        accumulator = 0.0f;
    }
};

This system smoothly moves the objects between their positions in the physics engine every frame, removing all jittery artifacts from the movement.

And that’s the final result. I hope you can use the principles from this article to get a better idea of how to lay out complex systems in nimble ways, or even make your own engine. There is a lot more to cover, but I’ll leave it to a part 2 because this is getting long. Let me know what you thought, should I keep focusing on design, or dive deeper into the math behind the implementations?

Thanks for reading, I hope to catch you next time!


Coronavirus: How does contact tracing work?

Words: - BBC News - 08:32 17-07-2020

People who have been in close contact with someone found to have Covid-19 are now being traced and asked to self-isolate for a fortnight.

But what happens when someone who went to the same pub or restaurant as you tests positive?

If you develop coronavirus symptoms and test positive, you'll be contacted by text, email or phone and asked to log on to the NHS Test and Trace website.

There you must give personal information, including:

Contact must have taken place within a nine-day period, starting 48 hours before symptoms appeared.

No-one contacted will be told your identity.

A parent or guardian must give permission for a call with under-18s to continue.

Organisations in certain sectors - like pubs and restaurants - must collect and keep customer details for 21 days.

It's because there's a higher risk of transmitting Covid-19 in public places - both indoors and outdoors - where people are near others they don't live with.

Giving personal information is voluntary and it's not the venue's responsibility to ensure it's correct.

Possibly, but not necessarily.

Official government advice says an NHS Test and Trace call does not always mean a pub or restaurant must close.

It depends on the circumstances and when the infected person visited.

NHS Test and Trace could ask staff to:

Local health protection officials have the power to close establishments.

A few pubs in England, which reopened on 4 July, have closed after customers tested positive.

Anyone deemed at risk of infection must stay at home for 14 days from their point of contact with the infected person.

You must self-isolate, even if you don't have symptoms, to prevent the virus spreading.

You should order food or medicine online or by phone, or ask friends and family to leave it on your doorstep.

Other people you live with won't have to self-isolate, unless they also develop symptoms, but they must take extra care around you regarding social distancing and hand washing.

It's currently voluntary. But the Department of Health says it may check up on people and could impose fines.

The scheme - which the prime minister claimed would be "world-beating" - was launched on 28 May. From then until 8 July:

The proportion of people reached and asked to self-isolate has been falling since the programme launched.

Sage, which advises the government, has said that at least 80% of contacts would need to isolate for the test and trace system to be effective.

While the overall figure is above 80%, figures for the last three weeks have been below that level.

Plans for an app were abandoned in mid-June, with ministers now working towards a model based on Apple and Google technology.

Thousands of contact tracers have been hired to get in touch with people

Northern Ireland's contact tracing operates exclusively by phone.

Scotland's system is called NHS Test and Protect, while Wales's "test, trace, protect" system launched on 1 June.

England's NHS Test and Trace service will only call from 0300 0135 000. It will not ask you:

If people can't work from home, the government says employers must ensure any self-isolating employee receives sick pay and let them use paid leave days if they prefer.

Self-employed people who are self-isolating but can't work from home can apply for an income support scheme grant, the government says, although this is designed to cover three months' worth of profits.


Coronavirus: Am I eligible for a test?

Words: - BBC News - 01:23 07-07-2020

Anyone with symptoms can apply for a test to see if they have coronavirus.

Getting tested - and then tracing people's contacts - is considered vital to enable health experts to contain local outbreaks.

Tests are now available to all adults and most children in the UK with a fever, a new continuous cough or a loss of smell or taste.

In England and Wales you can apply for a swab test for yourself, or for anyone in your household, if you or they have symptoms

In Northern Ireland and Scotland anyone over the age of five with symptoms can get tested.

The tests are generally the same for children and adults.

This test to see if you currently have the virus involves taking a swab up the nose and the back of the throat.

This can be done by the person themselves or someone else.

But these tests won't show if you have had Covid-19 in the past.

Antibody tests - which do look for evidence of past exposure - use blood samples.

The UK now has capacity for about 80,000 antibody tests a day, but these are only offered to health and care staff and should only be carried out by a healthcare professional.

They are also used to test random samples of people to estimate the level of exposure across the country.

Testing is essential if contact-tracing systems now in place across the UK are to work effectively, help stop the spread of the virus and avoid the need for UK-wide lockdowns.

And in theory it can help people, including NHS workers, know whether they are safe to go to work.

Testing can also let the health service plan for extra demand, and inform government decisions around social distancing.

Staff and residents in care homes will start receiving regular coronavirus tests from the week of 6 July.

People working or living in care homes have been able to be tested even if they don't have symptoms since the end of April - but not routinely.

There have been calls for hospital staff to also be routinely tested regularly, but a letter sent to hospital bosses indicated this is not the current plan for NHS staff.

Scientists at the University of Bristol believe 20% of positive cases could falsely appear as negative, wrongly telling someone they are not infected.

This can be because the swab sample wasn't good enough, the stage of infection someone's at when tested, or problems in the lab.

The Hospital Consultants and Specialists Association (HCSA), which represents hospital doctors, has called for NHS staff to be tested more than once.

Prime Minister Boris Johnson has pledged tests would be processed within 24 hours by the end of June, except where there were difficulties with the post.

Data up to 1 July showed that 90% of in-person tests carried out by mobile units and regional sites were turned around in 24 hours.

Speed is important because delays give the virus more time to spread.

The total capacity for the number of tests that can be done each day is now close to 300,000.

But this includes kits posted out to homes - some of which may never be returned - as well as those carried out at drive-thru centres.

The total also counts antibody tests and those carried out as part of a surveillance study by the Office for National Statistics, designed to give an idea of how many people have Covid-19, with and without symptoms, in the community.

During the coronavirus epidemic, the government has been challenged over its testing capacity and the data it has presented.

The government has now changed the way it sets out testing data.

There are several options.

You can travel to a drive-through testing site, visit a mobile testing unit or get a home testing kit delivered.

Testing at an NHS facility, such as a hospital, is available for patients and some NHS workers.

Once someone tests positive for Covid-19, they will be told to self-isolate for seven days - and their recent close contacts will be traced and told to isolate for 14 days by their nation's test and trace service, even if they don't have symptoms.

Close contacts include household members and anyone who has been within 2m (6ft) of the positive person for more than 15 minutes.

Read more about contact tracing in England, Scotland, Wales and Northern Ireland.


(read more)

Coronavirus: What happens when the furlough scheme ends?

Words: - BBC News - 12:56 08-07-2020

More than nine million workers who are unable to do their job because of the coronavirus outbreak have had their wages paid by the government.

The furlough scheme was designed to help people put on leave because of the outbreak, and prevent mass redundancies.

Firms start paying towards the scheme from August. It will close in October, with employers receiving a £1,000 bonus for every furloughed worker they keep on until January 2021.

So, if you've been put on furlough, what are your rights?

Under the Coronavirus Job Retention Scheme, workers placed on leave will continue to receive 80% of their pay, up to a maximum of £2,500 a month.

These furloughed employees can now go back to work part-time.

For example, an employer could pay someone to work two days a week, while the furlough scheme would cover the other three days not worked.

Chancellor Rishi Sunak has said employers must start sharing the cost of the scheme from August

From 1 August, employers will have to pay National Insurance and pension contributions for their staff.

In September, employers will have to pay 10% of furloughed employees' salaries - rising to 20% in October.

Chancellor Rishi Sunak confirmed the scheme will close at the end of October, but has acknowledged this will be a ''difficult moment'' and that some jobs will not be protected.

Yes. Employees can be made redundant at any point during the scheme, and there are concerns there may be significant job losses when it ends.

To encourage job retention, the government will pay businesses a £1,000 bonus for every furloughed employee they keep on until the end of January. These workers must be paid an average of at least £520 a month between November and January.

If a worker does lose their job and is entitled to redundancy pay then this is calculated based on the amount they earned before furlough.

Firms can't use the money from furlough to subsidise redundancy packages. If you're made redundant while on furlough because your firm has gone bust, you can apply for payments from the Insolvency Service.

When the scheme began, furloughed staff weren't able to do any work for their employer. However, they can now be brought back to work on a part-time basis.

This coincides with the reopening of the hospitality sector in England, which was particularly badly hit by the lockdown. Some furloughed staff will be able to return to work at pubs, bars and restaurants when they open to the public in England on 4 July.

In the meantime, those on furlough can volunteer in the community or even for their company as long as they aren't creating revenue or providing a service.

Furloughed workers can volunteer in the community

Employers can give employees additional training, but must top up furlough payments if they do not reach minimum wage for the period spent doing the training.

If you work for more than one firm, you can receive furlough from any of them, up to £2,500 a month per employer.

You can continue working for any that still need you or start working for a new employer, provided you are not breaching any existing contracts.

The take-up has been significant, with 9.3 million workers furloughed since March.

Employers had made £25.5bn of furlough claims by 28 June, and the scheme will cost the government an estimated £80bn in total.

The scheme covers full-time, part-time, flexible, zero-hour and agency workers if they were on their employer's PAYE payroll on 19 March 2020.

Workers must be furloughed for at least three weeks, and can be furloughed more than once.

Employers can use an online calculator to work out how much to claim

Check you are eligible for the scheme, and then work out how much to claim using the government's online calculator.

Companies can claim 80% of their employees' wages - capped at £2,500 per employee per month before tax, or £576.92 a week.

Employers can top up this pay if they wish, and must let workers know they have been furloughed.

If employees' pay varies each month, companies will need to calculate the claim manually, or seek professional advice.

Furlough covers overtime and commission payments built into an employee's salary, but not discretionary payments such as tips or optional bonuses.

HMRC will check the claim, and pay you through a UK bank within about six working days.

Apprentices can be furloughed and continue their training

Any UK organisation with employees can apply, but most claims are from private sector businesses and charities.

Apprentices can also be furloughed and continue their training. An individual can furlough an employee, such as a nanny, if they are paid through PAYE.

The self-employed who are adversely affected by the virus are eligible for a taxable grant of up to 80% of their average monthly profit, if they meet certain conditions.

They must have been self-employed since at least the start of April 2019, and earn an average of less than £50,000 in a tax year. Those who receive it can continue to work.

The grant is offered as a one-off payment covering three months, up to a maximum of £7,500. Applications for a second grant will open in August.

Anyone on furlough retains the same employment rights. If you are ill you are eligible for statutory sick pay or can be placed on furlough.

If you are on unpaid leave, shielding or have caring responsibilities, you are also eligible. Staff on parental leave will still receive statutory pay from the government.

Employers do not have to top up salaries that no longer reach the minimum wage.

(read more)

Coronavirus: How can I use the 'eat out to help out scheme'?

Words: - BBC News - 12:30 28-07-2020

Diners will soon be able to get money off their bill on certain days in August to encourage a return to cafes, pubs and restaurants.

It's hoped the ''eat out to help out'' scheme will provide a boost to the struggling hospitality industry, now that the national lockdown is easing.

But the scheme, which has been launched alongside the government's healthy eating strategy, has faced criticism from some anti-obesity campaigners.

The promotion gives people a discount of up to 50% when eating in, or drinking soft drinks, at a participating restaurant or other food establishment.

It is valid all day Monday, Tuesday and Wednesday from 3 to 31 August, in all parts of the UK that are not in a local lockdown.

The maximum discount available is £10 per person when you eat or drink in.

Restaurants have started to reopen again

Food and drink will appear on the menu at full price, and the restaurant will deduct the money off the bill and claim it back from the government.

The discount is only available on food and drink that you intend to consume on the premises, and can be used as many times as you like.

There is no limit on how many people can use the discount in one party, and it includes children.

Participating venues are supposed to offer the full 50% discount all day Monday to Wednesday and across the whole food and soft drink menu.

There's no minimum spend and you don't have to order food to be eligible, for example a £3 coffee would cost £1.50 under the scheme.

The offer can be used in combination with any other promotions and discounts being offered by the venue.

More than 53,000 businesses have signed up to the promotion, which covers participating:

Chancellor Rishi Sunak introduced the scheme to help get pubs and restaurants back on their feet

Lots of local, independent pubs, restaurants and cafes are taking part, as well as big chains.

Establishments can choose whether to sign up, and can join the scheme at any point. They need to register online, and can claim the money back and have it refunded within five working days.

To be eligible they must have a designated dining or drinking area, or access to one, and have been registered with their local authority since at least 7 July 2020. Businesses that have used the furlough scheme can apply.

A full list of places taking part has been published, allowing people to search for participating venues within a five-mile radius.

Chains that have signed up so far include:

The discount cannot be used on alcoholic drinks, service charges or food for a private function or event.

As the idea is to encourage people to eat in, establishments that are takeaway-only are not eligible.

Neither are catering services, bed and breakfasts or mobile food vans.

Businesses must have the facilities for people to dine in to take part, so venues offering informal seating in an area that does not belong to them are not included.

UK hospitality industry

3rd largest UK employer in 2018

3.2 million workers in the sector

99% of hospitality businesses are SMEs

£130bn annual turnover in 2018

67% expect it will be "months" before going to a restaurant

Source: UK Hospitality, EY

To help get struggling cafes, restaurants and pubs back on their feet.

Hospitality is one of the biggest employers in the UK and has been hit especially hard by the lockdown measures. In April 80% of venues closed, and 1.4 million hospitality workers have been placed on furlough, the highest proportion of any sector.

Some venues have been able to provide a takeaway service during lockdown. But this often means lower average spending per head and fewer people employed, and it is not an option for some businesses.

In August, when many premises will have reopened, the government hopes diners will be enticed in by the discount on offer. It also wants it to boost confidence in going out, and increase footfall at the quieter end of the week.

A recent survey suggested that many Britons felt uncomfortable about eating at a restaurant. The Office for National Statistics (ONS) said just over two in 10 adults were happy to have a sit-down meal.

The scheme is being used alongside other targeted help such as a cut in VAT for hospitality and tourism businesses.

Several fast-food chains are taking part in the scheme, which has drawn criticism from some anti-obesity campaigners. The National Obesity Forum, for example, has said it would be a ''green light to promote junk food''.

The discount scheme sits alongside the government's healthy eating plan that bans "buy one get one free" deals on unhealthy food, amid growing evidence of a link between obesity and an increased risk from coronavirus.

When asked if the government was promoting mixed messages, Care Minister Helen Whately told LBC Radio that under the healthy eating plans, large chain restaurants will have to publish calorie breakdowns of their meals, helping diners to make an ''informed choice''.

(read more)

Conventions for Command Line Options

Words: calvin - lobste.rs - 01:42 01-08-2020

nullprogram.com/blog/2020/08/01/

Command line interfaces have varied throughout their brief history but have largely converged to some common, sound conventions. The core originates from unix, and the Linux ecosystem extended it, particularly via the GNU project. Unfortunately some tools initially appear to follow the conventions, but subtly get them wrong, usually for no practical benefit. I believe in many cases the authors simply didn’t know any better, so I’d like to review the conventions.

The simplest case is the short option flag. An option is a hyphen — specifically HYPHEN-MINUS U+002D — followed by one alphanumeric character. Capital letters are acceptable. The letters themselves have conventional meanings and are worth following if possible.

program -a -b -c

Flags can be grouped together into one program argument. This is both convenient and unambiguous. It’s also one of those often missed details when programs use hand-coded argument parsers, and the lack of support irritates me.

program -abc

program -acb

The next simplest case is short options that take arguments. The argument follows the option.

program -i input.txt -o output.txt

The space is optional, so the option and argument can be packed together into one program argument. Since the argument is required, this is still unambiguous. This is another often-missed feature in hand-coded parsers.

program -iinput.txt -ooutput.txt

This does not prohibit grouping. When grouped, the option accepting an argument must be last.

program -abco output.txt

program -abcooutput.txt
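These rules are exactly what getopt-style parsers implement. As a quick illustration (mine, using Python's standard getopt module, not part of the original article), grouped flags and packed option arguments both parse unambiguously:

```python
import getopt

# "abco:" declares -a, -b, -c as plain flags and -o as taking an argument
opts, rest = getopt.getopt(["-abco", "output.txt"], "abco:")
print(opts)  # [('-a', ''), ('-b', ''), ('-c', ''), ('-o', 'output.txt')]

# the option's argument may be packed into the same program argument
opts, rest = getopt.getopt(["-abcooutput.txt"], "abco:")
print(opts)  # [('-a', ''), ('-b', ''), ('-c', ''), ('-o', 'output.txt')]
```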

This technique is used to create another category, optional option arguments. The option’s argument can be optional but still unambiguous so long as the space is always omitted when the argument is present.

program -c # omitted

program -cblue # provided

program -c blue # omitted (blue is a new argument)

program -c -x # two separate flags

program -c-x # -c with argument "-x"

Optional option arguments should be used judiciously since they can be surprising, but they have their uses.

Options can typically appear in any order — something parsers often achieve via permutation — but non-options typically follow options.

program -a -b foo bar

program -b -a foo bar

GNU-style programs usually allow options and non-options to be mixed, though I don’t consider this to be essential.

program -a foo -b bar

program foo -a -b bar

program foo bar -a -b

If a non-option looks like an option because it starts with a hyphen, use -- to demarcate options from non-options.

program -a -b -- -x foo bar

An advantage of requiring that non-options follow options is that the first non-option demarcates the two groups, so -- is less often needed.

# note: without argument permutation

program -a -b foo -x bar # 2 options, 3 non-options
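Python's getopt module (again my illustration, not from the article) exposes both behaviors: getopt.getopt stops at the first non-option, getopt.gnu_getopt permutes, and both honor --:

```python
import getopt

# strict ordering: parsing stops at the first non-option
opts, args = getopt.getopt(["-a", "-b", "foo", "-x", "bar"], "ab")
print(opts, args)  # [('-a', ''), ('-b', '')] ['foo', '-x', 'bar']

# GNU-style permutation: non-options may be mixed in anywhere
opts, args = getopt.gnu_getopt(["foo", "-a", "bar", "-b"], "ab")
print(opts, args)  # [('-a', ''), ('-b', '')] ['foo', 'bar']

# "--" ends option parsing, protecting option-like non-options
opts, args = getopt.getopt(["-a", "--", "-x", "foo"], "ax")
print(opts, args)  # [('-a', '')] ['-x', 'foo']
```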

Since short options can be cryptic, and there are such a limited number of them, more complex programs support long options. A long option starts with two hyphens followed by one or more alphanumeric, lowercase words. Hyphens separate words. Using two hyphens prevents long options from being confused for grouped short options.

program --reverse --ignore-backups

Occasionally flags are paired with a mutually exclusive inverse flag that begins with --no-. This avoids a future flag day where the default is changed in the release that also adds the flag implementing the original behavior.

program --sort

program --no-sort
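Such paired flags can even be had from argparse (my example, requiring Python 3.9+), which generates the --no- inverse automatically:

```python
import argparse

parser = argparse.ArgumentParser()
# BooleanOptionalAction (Python 3.9+) registers both --sort and --no-sort
parser.add_argument("--sort", action=argparse.BooleanOptionalAction, default=True)

print(parser.parse_args(["--sort"]))     # Namespace(sort=True)
print(parser.parse_args(["--no-sort"]))  # Namespace(sort=False)
```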

Long options can similarly accept arguments.

program --output output.txt --block-size 1024

These may optionally be connected to the argument with an equals sign =, much like omitting the space for a short option argument.

program --output=output.txt --block-size=1024

Like before, this opens up the doors for optional option arguments. Due to the required = this is still unambiguous.

program --color --reverse

program --color=never --reverse
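Parsers like getopt have no notion of optional option arguments, but the = rule is simple to implement by hand; a minimal sketch (the helper name is mine):

```python
def parse_long_option(arg):
    """Split a long option into (name, value).

    Per the convention, an argument is present only when attached
    with "=": a bare "--color" yields (name, None).
    """
    name, sep, value = arg[2:].partition("=")
    return name, (value if sep else None)

print(parse_long_option("--color"))        # ('color', None)
print(parse_long_option("--color=never"))  # ('color', 'never')
```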

The -- retains its original behavior of disambiguating option-like non-option arguments:

program --reverse -- --foo bar

Some programs, such as Git, have subcommands, each with their own options. The main program itself may still have its own options distinct from subcommand options. The program’s options come before the subcommand, and subcommand options follow the subcommand. Options are never permuted around the subcommand.

program -a -b -c subcommand -x -y -z

program -abc subcommand -xyz

Above, the -a, -b, and -c options are for program, and the others are for subcommand. So, really, the subcommand is another command line of its own.
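A sketch of how such dispatch can work (my illustration, not Git's actual implementation, and it assumes the main program's options are plain flags): since options are never permuted around the subcommand, the first non-option argument splits the command line in two.

```python
def split_subcommand(argv):
    """Split argv into (main_options, subcommand, subcommand_argv)."""
    for i, arg in enumerate(argv):
        if not arg.startswith("-"):
            # first non-option is the subcommand; the rest is its own
            # command line, to be parsed by the subcommand's parser
            return argv[:i], arg, argv[i + 1:]
    return argv, None, []

print(split_subcommand(["-a", "-b", "commit", "-m", "msg"]))
# (['-a', '-b'], 'commit', ['-m', 'msg'])
```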

There’s little excuse for not getting these conventions right, assuming you’re interested in following the conventions. Short options can be parsed correctly in just ~60 lines of C code. Long options are just slightly more complex.

GNU’s getopt_long() supports long option abbreviation — with no way to disable it (!) — but this should be avoided.

Go’s flag package intentionally deviates from the conventions. It only supports long option semantics, via a single hyphen. This makes it impossible to support grouping even if all options are only one letter. Also, the only way to combine option and argument into a single command line argument is with =. It’s sound, but I miss both features every time I write programs in Go. That’s why I wrote my own argument parser. Not only does it have a nicer feature set, I like the API a lot more, too.

Python’s primary option parsing library is argparse, and I just can’t stand it. Despite appearing to follow convention, it actually breaks convention and its behavior is unsound. For instance, the following program has two options, --foo and --bar. The --foo option accepts an optional argument, and the --bar option is a simple flag.

import argparse
import sys

parser = argparse.ArgumentParser()
parser.add_argument('--foo', type=str, nargs='?', default='X')
parser.add_argument('--bar', action='store_true')
print(parser.parse_args(sys.argv[1:]))

Here are some example runs:

$ python parse.py

Namespace(bar=False, foo='X')

$ python parse.py --foo

Namespace(bar=False, foo=None)

$ python parse.py --foo=arg

Namespace(bar=False, foo='arg')

$ python parse.py --bar --foo

Namespace(bar=True, foo=None)

$ python parse.py --foo arg

Namespace(bar=False, foo='arg')

Everything looks good except the last. If the --foo argument is optional then why did it consume arg? What happens if I follow it with --bar? Will it consume it as the argument?

$ python parse.py --foo --bar

Namespace(bar=True, foo=None)

Nope! Unlike arg, it left --bar alone, so instead of following the unambiguous conventions, it has its own ambiguous semantics and attempts to remedy them with a “smart” heuristic: “If an optional argument looks like an option, then it must be an option!” Non-option arguments can never follow an option with an optional argument, which makes that feature pretty useless. Since argparse does not properly support --, that does not help.

$ python parse.py --foo -- arg

usage: parse.py [-h] [--foo [FOO]] [--bar]

parse.py: error: unrecognized arguments: -- arg

Please, stick to the conventions unless you have really good reasons to break them!

(read more)

Telegram messenger client for GNU Emacs (unofficial client)

Words: arh - lobste.rs - 18:25 31-07-2020

telega.el is a full-featured unofficial client for the Telegram platform for GNU Emacs.

telega.el is actively developed; for this reason, some features are not implemented, or are present only as skeletons for future implementation. However, the core parts are mature enough that it is possible to use telega.el for basic chat.

Chat in @emacs_en group:

telega.el depends on the visual-fill-column and rainbow-identifiers packages. These dependencies are installed automatically if you install telega from MELPA or GNU Guix. Otherwise you will need to install them by hand.

telega.el is built on top of the official library provided by Telegram, TDLib 1.6.6.

Most distributions do not provide this package in their repositories, in which case you will have to install it manually by following the instructions.

GNU Guix, however, does have both telega.el and TDLib packaged. If you use GNU Guix you can skip directly to Installing from GNU Guix.

make is found on most modern machines. The other packages can be downloaded with the system package manager (such as apt for Debian-based distributions, dnf for Fedora, or pacman for Arch-based).

If you are using Emacs For Mac OS X, or you installed Emacs by running brew cask install emacs, your Emacs lacks svg support, so you cannot use telega. Please switch to emacs-plus.

If you are using Emacs-mac, or you installed Emacs by running brew install emacs-mac or brew cask install emacs-mac, your Emacs has a bug dealing with complex svg, which leads to Emacs hangups. Compiling Emacs with rsvg support by running brew install emacs-mac --with-rsvg will fix this problem.

NOTE: Telega cannot display stickers correctly with emacs-mac, even when emacs-mac is compiled with rsvg support. If you want sticker support, please consider switching to emacs-plus.

emacs-plus is the best choice to run telega.

telega.el requires at least GNU Emacs 26.1 with imagemagick and svg support. Most distributions provide GNU Emacs compiled with these dependencies when installing GNU Emacs with GTK+ support (graphical).

TDLib is the library for building Telegram clients. It requires a large amount of memory to be built. Make sure you are using TDLib version 1.6.6.

On MacOS you can install a pre-built tdlib package using homebrew from brew.sh. Just run:

$ brew install tdlib

On Linux, you will need to build tdlib from source.

$ git clone https://github.com/tdlib/td.git

Move into the folder with cd ./td or wherever you checked out td.

Prepare a folder for building the library:

$ mkdir build && cd build && cmake ../

$ make -jN

with N the number of cores that should be used for the compilation (the optimal value is the number of physical cores on the machine).

Finally, to install the library system-wide:

$ sudo make install

It will install headers to /usr/local/include and the library itself into /usr/local/lib. These paths are hardcoded in telega.el.

VoIP support in telega.el is optional; if you don't need VoIP, just ignore this section.

libtgvoip is the VoIP library for telegram clients. This is a fork of the original library with patches needed by telega.el.

$ git clone https://github.com/zevlg/libtgvoip.git

Move into the folder with cd ./libtgvoip or wherever you checked out libtgvoip.

Prepare a folder for building the library:

$ autoreconf --force --install && ./configure && make

$ sudo make install

It will also install headers to /usr/local/include and the library into /usr/local/lib.

telega.el is available from MELPA, so you can install it from there as a usual package. This is the preferred method, because it will automatically handle all dependencies. After installing telega.el from MELPA you can skip to the Fire up telega.el section.

Or you could use git repository with this melpa-style recipe:

Now that the TDLib library is set up, it is time to install telega.el. The first step consists in building telega-server, which is a C interface to TDLib; alternatively, just let telega ask you at the first start and do the job (dependencies for compilation will need to be installed ahead of time).

$ git clone https://github.com/zevlg/telega.el

Moving into the folder with cd telega.el, it is possible to build the telega-server executable and move it into $HOME/.telega with:

$ make && make install && make test

If you want VoIP support in telega.el and libtgvoip is installed, then use this instead:

$ make WITH_VOIP=t && make WITH_VOIP=t install && make WITH_VOIP=t test

This command does not require superuser privileges.

Start with M-x telega RET and follow the instructions.

Now it is time to install telega.el on GNU Emacs.

This can be done with use-package:

The code should be put in the configuration file for Emacs, which usually is init.el or emacs.el.

telega.el and tdlib are both available in GNU Guix. If you have a resource-constrained machine or would simply prefer to bypass compiling tdlib from source, this is a good option!

$ guix package -i emacs-telega

Use the shell installer script, or install GNU Guix manually on top of your current distribution (see the Installation Documentation).

Enable fetching substitutes from the build server cache if you do not wish to build from source (see Substitute Server Authorization).

$ guix package -i emacs emacs-telega

You will need a version of emacs installed from GNU Guix, because it is modified with an autoloader to identify and automatically use emacs packages installed from Guix.

Consult the official GNU Guix documentation for further questions. Issues related to the GUIX package must be accompanied by the GUIX label in the issue tracker.

Do note that since telega is actively maintained, installations from Guix might at times lag behind master, but regular attempts to keep it updated will occur.

If the version in Guix is too outdated or is missing a feature, please follow the protocol for the issue tracker.

telega.el can now be started with M-x telega RET. The first time, it will ask for the phone number you have associated with the Telegram network.

See Minor Modes section in telega manual.

telega.el ships with support for D-Bus notifications, but they are disabled by default. To enable notifications, add the following code to your init.el:

Emoji completion with the :<EMOJI-NAME>: syntax uses the nice company-mode. telega provides the telega-company-emoji company backend, so you need to add it to company-backends, perhaps along with other backends, in telega-chat-mode-hook, for example:

In official telegram clients, all messages in group chats are displayed even if a message has been sent by a blocked user. telega.el has a client-side message filtering feature implemented. Ignoring messages can be done by installing special functions into telega-chat-insert-message-hook which can mark a message as ignored; for example, to ignore messages from a particular user with id=12345 you could add the following code:

Or to ignore messages from blocked users, just add:

To view recent messages that have been ignored, use the M-x telega-ignored-messages RET command.

Join our Telegram group to discuss the development of telega.el.

Submitting issues is exceptionally helpful.

telega.el is licensed under GNU GPL version 3.

Q: I have this error while installing telega

Cannot open load file: No such file or directory, visual-fill-column

A: telega.el depends on the visual-fill-column package; please install it first. This package is available from MELPA.

Q: I have this error while running telega

A: telega.el requires Emacs with SVG and ImageMagick support.

SVG support in Emacs is done using the librsvg library. As for imagemagick, you will need the libmagickcore-dev and libmagickwand-dev packages installed. Unfortunately, Emacs recently disabled imagemagick support by default (see https://lists.gnu.org/r/emacs-devel/2018-12/msg00036.html), so you need to compile Emacs by hand, specifying the --with-imagemagick flag to the ./configure script.

Telega won't depend on imagemagick in the future, since the required image features have been added to newer Emacs; see https://lists.gnu.org/r/emacs-devel/2019-06/msg00242.html

Q: Does telega have proxy support?

A: Yes, use telega-proxies custom variable, for example:

See C-h v telega-proxies RET for full range of proxy types.

Q: Stickers are not shown.

A: Make sure you have imagemagick support, and please install the webp package.

Q: telega.el is unbearably slow.

A: You might be hitting an Emacs bug, described here: https://lists.gnu.org/archive/html/bug-gnu-emacs/2020-01/msg00548.html

Also see https://github.com/zevlg/telega.el/issues/161

Q: There are no glyphs for some unicode characters.

A: Please install fonts-symbola package

Q: There are some formatting issues when some unicode characters are used.

A: Yes, partly. If a character has the full width of multiple ordinary chars, you can tweak char-width-table. Add code like this to your init.el:

There is also the telega-symbol-widths custom variable, which you might want to modify.

Q: Is there erc-like chats tracking functionality?

A: Yes, set telega-use-tracking-for to non-nil.

Tracking is done only for opened chats, i.e. chats having a corresponding chat buffer. Its value is a Chat Filter (https://github.com/zevlg/telega.el/blob/master/doc/telega-manual.org#chat-filters).

For example, to enable tracking for chats with enabled notifications or for chats where you have unread mention, use:

Q: Is it possible to use telega in tty-only Emacs (aka emacs-nox)?

A: Yes, set telega-use-images to nil, before start.

Q: Is it possible to use markup in outgoing messages?

A: Yes, use C-u RET to send a message with markup; also see telega-chat-use-markdown-version. Supported markup:

Note: Language syntax highlighting requires the contrib telega-mnz module.

Q: Is there a manual for telega.el?

A: We have started writing one: https://github.com/zevlg/telega.el/blob/master/doc/telega-manual.org

(read more)

Coronavirus: What's happening to free school meals this summer?

Words: - BBC News - 08:38 17-06-2020

The government in England has agreed to extend a voucher scheme for children on free school meals during the summer holidays, following a campaign by footballer Marcus Rashford and others.

It had previously insisted the scheme would finish at the end of the summer term. Scotland and Wales will also continue with the voucher programme.

So, who is eligible for free school meals and how do they work?

Free school meals have been at least partially funded by the government for more than a century, because of concerns about malnourishment and children being too hungry to concentrate during lessons.

Children of all ages living in households on income-related benefits may be eligible, from government-maintained nurseries through to sixth forms.

Eligibility varies slightly between England, Wales, Scotland and Northern Ireland because the nations set their own rules.

New claims made from April 2018 in England must come from households earning a maximum income of £7,400 a year after tax, not including any benefits. It's the same in Scotland and Wales, but in Northern Ireland the household income threshold is £14,000

In England and Scotland, all infant state school pupils (those in Reception and in Years 1 and 2) can get free school meals during term time.

If a child qualifies for school meals they remain eligible until they finish the phase of school they're in as of 31 March 2022, whether primary or secondary.

In England, about 1.3 million children claimed for free school meals in 2019, or about 15% of state-educated pupils.

In Manchester, where Marcus Rashford grew up, the figure is 28.1%

The take-up was greatest in parts of London, the north of England and the Midlands where between a quarter and a third of all pupils were receiving free school meals.

Premier League footballer Marcus Rashford successfully campaigned for school meals vouchers to continue over the summer

The majority of children have not been at school during the coronavirus pandemic. This has prompted concerns that those eligible for free school meals could "fall through the cracks" and go hungry.

In recent years, free school meals have been linked to lowering obesity levels, and boosting academic achievement for poorer pupils.

During term time, the government in England expects schools to support pupils eligible for free school meals through an alternative scheme, such as:

Many families have been issued with either an electronic voucher or gift card worth £15 each week per pupil, to spend at supermarkets including Sainsbury's, Asda, Tesco, Morrisons, Waitrose and M&S.

But the system has suffered problems including schools struggling to log on, parents being unable to download vouchers and some saying the vouchers failed when they tried to use them.

The programme, which has cost more than £129m since lockdown began, also ran throughout the Easter and May half-term holidays.

Calling for the government in England to change its decision, Manchester United and England forward Marcus Rashford said his family had once relied on free school meals. "The system isn't built for families like mine to succeed," he said.

Campaigners had also threatened legal action against the government if it didn't extend the food voucher scheme.

Reversing the decision, Prime Minister Boris Johnson welcomed Mr Rashford's "contribution to the debate around poverty".

A "Covid summer food fund" will now offer six-week food vouchers to children eligible for free school meals in England during the holidays.

(read more)

Open Usage Commons: a warning

Words: Phate6660 - lobste.rs - 17:46 31-07-2020

This General Discussion board is meant for topics that are still relevant to Pale Moon, web browsers, browser tech, and related, but don't have a more fitting board available. Please stick to the relevance of this forum here, which focuses on everything around the Pale Moon project and its user community. "Random" subjects don't belong here, and should be posted in the Off-Topic board.

Forum rules

16 posts • Page 1 of 1

Moonchild

Pale Moon guru

Posts: 27267 Joined: 2011-08-28, 17:27

Location: 58°2'16"N 14°58'31"E

Open Usage Commons: a warning

I was pointed to a blog post made in the Google websphere that talks about the Open Usage Commons, an organization to, allegedly, provide trademark services for Open Source developers so they can protect their brands, identity and above all trademarks. On the surface this sounds great to provide to the Open Source community, but a few things didn't sit right, so I dug a little deeper, visited their website, and specifically their FAQ to get an understanding of what they are (supposed to be) doing for F(L)OSS, its communities and developers, and the various Open Source identities of software products, services and groups.

Among other things in the generally over-vague FAQ, which in half its questions focused on specific usage questions about the 3 pre-approved Google products/services that already have a place there, there was the following sticking point:

To be able to provide the management and support services that are part of the Open Usage Commons, the trademarks that join the Commons will be owned by the Open Usage Commons.

Source: https://openusage.org/faq/#what-is-the-difference-between-owning-source-code-ip-and-owning-a-trademark-who-owns-the-ip-of-the-projects (last sentence)

The FAQ also explained who is involved, and that made me further question things:

The Open Usage Commons consists of a Board of Directors. It will soon have a Legal Committee that advises the board and the projects, as well as advisory members – individuals selected by the projects to guide the trademark usage policies. [...] The board of directors is Allison Randal (open source developer and researcher), Charles Isbell (Georgia Institute of Technology), Chris DiBona (Google), Cliff Lampe (University of Michigan), Miles Ward (SADA), and Jen Phillips (Google).

Source: https://openusage.org/faq/#who-is-involved-in-the-open-usage-commons

Feel free to check out any of these names, most will likely have a Wikipedia page or similar describing their involvement in management circles and the general area they work in. I find the collection of people most peculiar if the mission really is what the vagueries describe.

So, after finding all that out, I decided to send them an e-mail with a number of questions that were raised, but not answered, by their website, outlined below:

As stated in your FAQ, any projects joining the Open Usage Commons will have to sign over property ownership of trademarks to you. This will effectively give you absolute control over the brand and trademark used by the project (effectively its identity). Why would any project owner want to do this?

For many FOSS developers, creating and establishing their identity (and growing their audience as a result) is a slow, organic process that often takes years, so we don't see at all what benefit there would be to basically hand this over to a new organization that has nothing but a promise to found itself on. Can you clarify the benefits for Open Source projects that have already established their identity, brand and trademarks, who are your target audience?

What exactly are your ties with, and how are you influenced by, Google, given that you have 2 Google board members, Google obviously being an advisory member on trademark policies, and holding an early-adopter/preferential position by having the first projects be part of the Open Usage Commons before it's even opened to any(!) other developers?

3a. All things being equal, to what level does Google's influence determine your direction and policies?

Trademarks and branding are primarily a mark of quality assurance to users of Open Source, and the GNU Open Source philosophy wholly agrees with that by supporting that specific rules for the use and redistribution of branded software are perfectly okay, to protect this QA. By not being the project developers, you can, in our opinion, never guarantee the necessary quality assurance on software that carries certain branding, and as such are in no position to determine what is fair use of a trademark or who is in a position to properly use it.

How are you, as a commons organization, going to be able to assure free and fair use of trademarks and branding while not diminishing this QA verification and expectancy whenever specific brands and trademarks are used?

And some less important legal-tech questions:

Trademarks are region-specific, yet your organization seems to be USA-centric. Are you going to provide world-wide coverage of trademark usage protection or will it be solely for the USA?

What legal body do you answer to in case of disputes?

Is there a way for project owners to withdraw from the Open Usage Commons after they have joined, and effectively regain full control over their IP?

All of these are important questions for anyone to know who might consider joining this Commons organization. Receiving a reply from Chris DiBona (who is, in case you aren't aware, the Director of Open Source at Google, overseeing all Open Source activities at the mega corp.) was, to say the least, an utterly disappointing one-liner:

Our website covers many of these quest and the rest will be clear over time. Visit back later to find out more.

Not even touching on any one of these points. I wrote back that it calls the org's trustworthiness into question if these important yet general questions aren't answered, since it's a good question how they expect to ever build the amount of trust required for developers to give the org ownership of their brand, brand identity (and therefore the known names of their software) and trademarks if claimed.

The response was yet another one-liner, further flat-out refusing to address the voiced concerns in the questions:

I'm sorry, it's nothing personal but all the questions you have will be answered in time on the website or they won't, regardless were not going to engage with you on these personally, it doesn't scale.

So the non-answer amounts to "it will be answered or it won't", further clouding the important issues with questions left unanswered and vagueries without commitment.

So, to summarize:

Google's directors have 2 seats on the board. Actually, no, make that 3. Miles Ward was a Google employee for 5 years in management too (Director of Solutions) until April 2019. Don't think those ties are so easily cut.

The org currently only has a board of directors, and refuses to answer questions that should already be known by setting up the organization and launching it. This refusal doesn't feel on the level, and does not instil the needed trust for anyone to sign over their intellectual property and software identity.

Some of Google's projects are already established members ahead of everyone else, which will have assigned "advisory members" for the trademark policies the org should maintain -- basically pre-approved VIPs, which indicates quite clearly that this is a Google venture above all.

The org wants to have full ownership of your brand and trademarks.

The org can't reasonably be expected to be an arbiter on "free and fair use" of trademarks when those trademarks are tied directly to quality-of-work.

There is no information on legal precedent, legal governing law, or how the org is funded and by whom.

Which brings me to my warning: if you are an Open Source developer and own/lead a project, and have an established name, brand, logo or trademark for it, please think twice about joining this "commons" that will become the owner of your work's identity; the part of the Open Source software that makes your project yours. Don't think you are too small of a developer to be interesting; and don't think you need a lawyer to own a trademark (all that takes is doing some research on previous uses of your brand name, and if not in conflict, simply publicly making a claim with intent to actively use the trademark (TM) and defending it if someone tries to steal it from you). Don't give away your IP, certainly not to the likes of an org that seems very heavily Google-influenced and Google-sourced! As far as I can tell they will be able to, and likely will, hold your trademark and brand hostage.

How does that scale for you, Chris?

"There will be times when the position you advocate, no matter how well framed and supported, will not be accepted by the public simply because you are who you are." -- Merrill Rose

Baloo

Moonbather

Posts: 66 Joined: 2017-08-24, 15:02

Re: Open Usage Commons: a warning

Google is trying to own open source. Their blatant attempt to steal trademarks is just another step to web standard domination as they use open source to undercut their competitors' ability to make any money at all.

Tharthan

Off-Topic Sheriff

Posts: 689 Joined: 2019-05-20, 20:07

Location: New England

Re: Open Usage Commons: a warning

Don't give away your IP, certainly not to the likes of an org that seems very heavily Google-influenced and Google-sourced! As far as I can tell they will be able to, and likely will, hold your trademark and brand hostage.

Do you actually think that people will listen, especially those outside of the GNU-type camp or people affiliated with UXP projects?

I'm sceptical.

Google is trying to own open source.

That much is obvious.

Their blatant attempt to steal trademarks is just another step to web standard domination as they use open source to undercut their competitors ability to make any money at all.

What bothers me, frankly, is that they have such gall. No other company would or could get away with trying to pull this.

"This is a war against individuality and intelligence. Only thing we can do is stand strong." — adesh, 9 January 2020

Moonchild

Pale Moon guru

Posts: 27267 Joined: 2011-08-28, 17:27

Location: 58°2'16"N 14°58'31"E

Re: Open Usage Commons: a warning

Do you actually think that people will listen, especially those outside of the GNU-type camp or people affiliated with UXP projects?

They might, if enough people know about it.

"There will be times when the position you advocate, no matter how well framed and supported, will not be accepted by the public simply because you are who you are." -- Merrill Rose

New Tobin Paradigm

Knows the dark side

Posts: 7492 Joined: 2012-10-09, 19:37

Location: Just beyond the Lament Configuration

Re: Open Usage Commons: a warning

That isn't true but Google could succeed in it if no one challenged it.

- Welcome to the worst nightmare of all... reality! -

Moonraker

Board Warrior

Posts: 1434 Joined: 2015-09-30, 23:02

Location: uk.

Re: Open Usage Commons: a warning

In a nutshell: if you join this organisation, then everything you have created and worked on, and all related trademarks etc., are to be handed to Google on a silver platter, and YOU as developer have to answer to Google chiefs... what a shit show. They truly are trying to gollop up everything, and it seems open source is not even safe from the goliath. Good grief, I do hope this post gets spread quite rapidly. Just out of curiosity, Moonchild, and not to appear rude, but were you by any chance stroking your chin and considering joining...?

Xenial puppy linux 32-bit.Pale moon 28.9.3

vannilla

Board Warrior

Posts: 1052 Joined: 2018-05-05, 13:29

Re: Open Usage Commons: a warning

The U.S.A.-centric probability is pretty fearsome if you ask me, more than the other concerns. Despite being called the land of freedom, there are some restrictive rules over there which do not exist in other countries (not just third-world places), so by being part of this thing a non-American developer will very likely be subjected to laws he/she would otherwise not be, and that's pretty scary.

Tharthan

Off-Topic Sheriff

Posts: 689 Joined: 2019-05-20, 20:07

Location: New England

Re: Open Usage Commons: a warning

Despite being called the land of freedom, there are some restrictive rules over there which do not exist in other countries (not just third-world places)

Freedom ≠ licence. Different things, vannilla. Having the licence to do whatever on Earth you want is not what the U.S. is about. Granted, some of the copyright laws here, especially relating to digital stuff, may go a bit overboard/have loopholes that content owners can manipulate. I've been concerned about that for years.

"This is a war against individuality and intelligence. Only thing we can do is stand strong." — adesh, 9 January 2020

Potkeny

Moonbather

Posts: 51 Joined: 2018-08-03, 17:00

Re: Open Usage Commons: a warning

To be able to provide the management and support services that are part of the Open Usage Commons, the trademarks that join the Commons will be owned by the Open Usage Commons. For most end users that are just consuming the source code, this doesn't directly, immediately change their experience. Who it does impact are the companies that want to offer managed versions of these projects, or who have the project as part of their service and want to use the project brand to demonstrate quality/innovation/etc. Applying OSS principles and neutral ownership of the trademark means that these companies can invest in offering "Project as a Service" because it's a guarantee that they can use that mark; it won't be suddenly taken away on a whim after they've built up an offering around it.

I might be too tired to understand it correctly, but are they really saying the benefit is not for "the project", but for those companies who make money by depending on "the project"?

My pessimist self says Google wants to bathe in the "we're supporting open-source projects!" praise while controlling (if not owning) the trademarks of those "open-source" projects.

RoestVrijStaal

Moonbather

Posts: 57 Joined: 2019-06-19, 19:18

Location: Dependency Hell

Re: Open Usage Commons: a warning

That board of OpenUsage is a joke. 50% of its members have close ties with Google. I won't be surprised if the rest have them as well, but they are omitted from the persons' descriptions.

Also about the ownership thing: the FSF advises to users of their licenses to do a similar thing as well, minus the transfer of ownership of trademarks and branding.

adesh

Board Warrior

Posts: 1103 Joined: 2017-06-06, 07:38

Re: Open Usage Commons: a warning

I think they'll be able to engulf half of the projects easily. Many "internet software" projects already make use of Google libs and APIs. Also, lazy developers won't read as much as Moonchild did.

I'd say this can happen for the same reason we have reached where we are.

The U.S.A.-centric probability is pretty fearsome if you ask me, more than other concerns.

Your Europe is best. If I leave my country, I'll join a good country in Europe.

Tharthan

Off-Topic Sheriff

Posts: 689 Joined: 2019-05-20, 20:07

Location: New England

Re: Open Usage Commons: a warning

Also about the ownership thing: the FSF advises to users of their licenses to do a similar thing as well, minus the transfer of ownership of trademarks and branding.

I think that that is quite a different kettle of fish.

GNU and the FSF are pretty up front about their ideology, and you can either take it or leave it. If you contribute to their cause, you almost certainly know what you are getting into.

For the average person, Google is totally different. And Google doesn't exactly advertise as being a radical organisation wishing to impose their products and control upon everyone.

"This is a war against individuality and intelligence. Only thing we can do is stand strong." — adesh, 9 January 2020

adesh

Board Warrior

Posts: 1103 Joined: 2017-06-06, 07:38

Re: Open Usage Commons: a warning

Google doesn't exactly advertise as being a radical organisation wishing to impose their products and control upon everyone.

And yet it does!

Moonchild

Pale Moon guru

Posts: 27267 Joined: 2011-08-28, 17:27

Location: 58°2'16"N 14°58'31"E

Re: Open Usage Commons: a warning

Also, lazy developers won't read as much as Moonchild did.

Why do you think I posted a warning about it? Because devs who don't look into it as much or have trouble cutting through the woolly wording on the website might fall into the trap and lose ownership of their work.

I might be too tired to understand it correctly, but are they really saying the benefit is not for "the project", but for those companies who make money by depending on "the project"?

Yes and no. They are saying they benefit both. What they propose is to take the "burden" of trademark management out of developers' hands and become arbiter of what is "free and fair usage" of the brands and trademarks (which, IMO they can't do, see my question 4 in the OP). This benefits companies who might otherwise have to be concerned about using FOSS logos in their own projects, etc., as if that kind of use isn't normally allowed under fair use. In fact, as long as you mention that the trademarks belong to their respective owners when you use them, i.e. credit the owners of the marks, there should never be an issue using the marks to signify dependence on third party software unless it's explicitly forbidden.

So, for companies it's about getting a "guarantee" that the branding can be used freely and perpetually. It's being marketed as a convenience for developers, to "let the Commons worry about all that pesky trademark stuff"; but seriously, if you have already established yourself with a brand and trademark and known name, you really don't need them to manage that for you. Not that this can't easily be done by the developers themselves, because all it takes is publishing a statement on the use of branding and trademarks, letting third parties know what you want to allow and not allow. Fair use is otherwise a pretty well-defined term for trademarks and logos!

And they certainly don't need to own your IP to do such a thing.

"There will be times when the position you advocate, no matter how well framed and supported, will not be accepted by the public simply because you are who you are." -- Merrill Rose

Moonchild

Pale Moon guru

Posts: 27267 Joined: 2011-08-28, 17:27

Location: 58°2'16"N 14°58'31"E

Re: Open Usage Commons: a warning

Also about the ownership thing: the FSF advises to users of their licenses to do a similar thing as well, minus the transfer of ownership of trademarks and branding.

I never really agreed with that myself, but when it comes to code, and the free adaptability and sharing of it, from a certain point of view (some would say the communist approach to developing) an argument can be made for it. Even so, that's entirely up to the devs if they want to make their code not just open licensed but pretty much public domain, with the copyright only there so commercial entities can't snag it up. That commitment goes both ways though, since it also prevents you from using the code in your own private sphere in the future, if you want to have something proprietary based on what you wrote. But code management is not at all what this is about. This is about your trademarks, your brand, your software and your software identity; that which sets your efforts apart from a publicly licensed pool of code, that which you are proud to underwrite with your name and share. The FSF will never try to take that away from you.

"There will be times when the position you advocate, no matter how well framed and supported, will not be accepted by the public simply because you are who you are." -- Merrill Rose

Potkeny

Moonbather

Posts: 51 Joined: 2018-08-03, 17:00

Re: Open Usage Commons: a warning

I see, thank you; "the burden of trademark management" did slip my mind as a possible benefit.


(read more)

Coronavirus: When I come back from Spain, will I get paid if I self-isolate?

Words: - BBC News - 19:11 27-07-2020

The government's decision to impose a 14-day quarantine on travellers arriving in the UK from Spain has caused "uncertainty and confusion", as one holiday firm has put it.

So, what does it mean for those visiting Spain?

It will depend on individual employers.

You're not automatically entitled to statutory sick pay if you are self-isolating after returning from holiday or business travel, industrial relations body Acas says.

You should tell your boss as soon as possible and ask them about the company's policy.

If you can work from home, for instance, then you can be paid as normal. But if you can't, another solution could be to take annual leave so you can get holiday pay while self-isolating.

Your employer could choose to pay you sick pay, either at the statutory rate or a higher level.

However, if you have coronavirus or its symptoms and have to self-isolate, then you are eligible for statutory sick pay, which pays £95.85 per week.

A Downing Street spokesperson said: "If there are people who need urgent support then they may be entitled to the new-style employment support allowance or universal credit."

The Association of British Insurers says existing insurance is likely to cover holidaymakers who were in Spain when the government's advice changed.

But it added travelling against Foreign Office advice is likely to invalidate travel insurance.

Under the regulations, if an online travel agent makes any significant change to your holiday - such as flight times or the hotel - they should tell you and give you a reasonable period of time to accept it, or cancel with a full refund.

Additionally, the regulations allow you to claim a refund for any trip to a destination with a Foreign and Commonwealth Office warning against it, such as Spain currently.

For refunds on other booked holidays, consult your airline, tour operator or travel agent.

Tui customers due to travel anywhere in Spain between 27 July and 9 August can cancel or amend holidays and receive a full refund or the option to rebook. People with holidays booked from 10 August will be updated on 31 July.

Jet2 has cancelled flights to the Balearic Islands and Canary Islands on Tuesday 28 July, as well as mainland Spain. It advised customers not to travel to the airport.

Other firms such as British Airways and EasyJet are maintaining their flight schedules.

People whose trips are cancelled should get a refund within two weeks, but there may be a delay.

When you arrive back in the UK, you must go straight home or to other suitable accommodation. You are allowed to travel by public transport.

Your 14-day period of self-isolation starts from the day after you arrive.

You cannot leave home except for medical assistance, to attend court or go to a funeral - or to go shopping for essentials, if no-one else can do this for you.

Leaving home for work, exercise or socialising is not allowed.

In England, there are some other reasons you can leave your accommodation.

People can be fined up to £1,000 for breaking quarantine rules.

However, the National Police Chiefs' Council says that as of 20 July, only one person has been fined for not self-isolating after arriving in England.

It is difficult to say, as the government is reviewing nations on its "safe level" for travel every week.

Which? magazine travel editor Rory Boland told BBC News that in this instance an update may come sooner, as travel "is so important to UK travellers and to the economy of Spain".


(read more)

SETL - The SET Programming Language (2010)

Words: hwayne - lobste.rs - 17:23 31-07-2020

For the last few weeks I have been playing with the programming language SETL. I like learning these kinds of "paradigmatic" programming languages even if they are not much in use anymore. There is almost always something new to learn from them, or they make one see well-known things in a new light.

SETL was created in the late 1960s and is considered an early very high level language (VHLL), using sets as the bearing principle (as in mathematical formulation) together with a Pascal-like syntax. Some trivia:

For more about the history of SETL, see David Bacon's dissertation "SETL for Internet Data Processing". David Bacon is the person behind GNU SETL. In one section Bacon compares SETL with some other languages (Perl, Icon, functional languages, Python, Rexx, and Java).

I like SETL, largely for its handling of sets and tuples (arrays), which makes prototyping some kinds of problems easy, especially those with a mathematical bent. However, the advantages SETL once had as a VHLL, prior to the "agile" languages - e.g. Perl, Python, Ruby, Haskell, etc. - are not so big anymore. (I should probably mention that I'm at least acquainted with the mentioned languages.)

In case I forget it: see my page with links and my SETL programs (and maybe some not mentioned here).

There are several different versions (or offspring) of SETL:

I will not go through all the features of SETL here, just show some examples of what I have done and what I like about the language. See the tutorial at settheory.com for an in-depth introduction to the language (SETL2, but much also applies to SETL), or Robert B. K. Dewar's overview (PDF).

All the examples below work with GNU SETL. Many of the smaller examples are shown as command-line one-liners, since I often test different features this way. And as you may notice, quite a few of the examples are not unlike programs written in Python or Haskell.

The mandatory primes program:

primes2 := {p in {2..10000} | forall i in {2..fix(sqrt(p))} | p mod i /= 0}; print(primes2);
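For comparison, the same set comprehension can be sketched in Python (my translation, not from the article):

```python
import math

# Primes up to 10000: keep p if it has no divisor i in 2..floor(sqrt(p)),
# mirroring the SETL forall-quantified set former.
primes2 = {p for p in range(2, 10001)
           if all(p % i != 0 for i in range(2, math.isqrt(p) + 1))}
```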

One feature I like (and use a lot) is test things from the command line:

$ setl 'time0:=time(); primes := {p in {2..100000} | forall i in {2..fix(sqrt(p))} | p mod i /= 0}; print("Num primes:", #primes); print("It took", (time()-time0)/1000, "seconds");'
Num primes: 9592
It took 2.222 seconds

A variant using not exists instead of forall:

$ setl 'print({n in {2..100} | (not (exists m in {2..n - 1} | n mod m = 0))});'
{2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97}

Still another variant, using the difference of {2..n} and the composite numbers:

$ setl 'n := 150; print({2..n} - {x : x in {2..n} | exists y in {2..fix(sqrt(x))} | x mod y = 0});'
{2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 101 103 107 109 113 127 131 137 139 149}

Here are some other examples of set/array comprehensions.

As a one liner:

$ setl 'f:= [1,1]; r := [f(i) := f(i-1)+f(i-2) : i in [3..10]]; print(f);'
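A rough Python equivalent of this trick (mine, not from the article) is simply to grow the list in a loop, since Python comprehensions can't assign into the list being built:

```python
# Fibonacci numbers, mirroring the SETL f(i) := f(i-1) + f(i-2)
# assignment inside the former. SETL indices are 1-based.
f = [1, 1]
for i in range(3, 11):              # SETL's i in [3..10]
    f.append(f[i - 2] + f[i - 3])   # 0-based f[i-1-1] + f[i-2-1]
```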

Pythagorean triples as a "one-liner" (not very fast for, say, [1..300]).

$ setl 'print({[a, b, h]: b in {1..30}, a in {1..b - 1} | (exists h in {2..a + b} | (a*a + b*b = h*h)) and (not (exists d in {2..b - 1} | ((b mod d) = 0 and (a mod d) = 0)))});'
{[3 4 5] [5 12 13] [7 24 25] [8 15 17] [20 21 29]}
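The same search reads naturally as a Python set comprehension too (my sketch, not from the article; the coprimality test replaces the SETL "no common divisor" quantifier):

```python
import math

# Primitive Pythagorean triples (a, b, h) with a < b <= 30.
triples = {(a, b, h)
           for b in range(1, 31)
           for a in range(1, b)
           for h in range(2, a + b + 1)
           if a * a + b * b == h * h and math.gcd(a, b) == 1}
```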

Creation of a power set (all subsets of a set), with the intermediate values printed:

$ setl 'a := {1,2,3}; p := {{}}; (for x in a, y in p) p with:= y with x; print(p); end; print(p);'
{{} {1}}
{{} {1} {2}}
{{} {1} {2} {1 2}}
{{} {1} {2} {3} {1 2}}
{{} {1} {2} {3} {1 2} {1 3}}
{{} {1} {2} {3} {1 2} {1 3} {2 3}}
{{} {1} {2} {3} {1 2} {1 3} {2 3} {1 2 3}}
{{} {1} {2} {3} {1 2} {1 3} {2 3} {1 2 3}}
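The same incremental construction as a rough Python sketch (mine, not from the article): for each element, union it into every subset collected so far.

```python
# Power set of {1, 2, 3}: start from [set()] and double it per element.
powerset = [set()]
for x in [1, 2, 3]:
    powerset += [s | {x} for s in powerset]
```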

Counting values from a tuple into a map (hash table).

A map is represented as a set of [key, value] tuples.

First a slow solution:

a := [1,1,2,2,3,3,3,4,4,4,4];
m := {[i, #[j : j in [1..#a] | a(j) = i]] : i in {i : i in a}};

Then a faster version:

$ setl 'a := [1,1,2,2,3,3,3,4,4,4,4]; m := {}; for i in a loop m(i) +:= 1; end loop; print(m);'
{[1 2] [2 2] [3 3] [4 4]}
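In Python the counting loop collapses to the standard library's Counter (my comparison, not from the article):

```python
from collections import Counter

# Occurrence counts, the same idea as the SETL m(i) +:= 1 loop.
a = [1, 1, 2, 2, 3, 3, 3, 4, 4, 4, 4]
m = Counter(a)
```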

The for x = s(i) construct in a map (hash table) loop gives both the index (i) and the value (x). Here we also see how to represent ranges with an increment other than 1 (much like Haskell).

setl 's := {[i,i**2] : i in [1,3..15]}; for x = s(i) loop print(i,x); end loop;'
1 1
3 9
5 25
7 49
9 81
11 121
13 169
15 225


SETL has a special syntax for multi-maps, i.e. where a key has more than one value: use braces "{}" instead of parentheses "()" for accessing. Here the key 1 has two values (["a"] and ["c"]). Using a "single-map" access (a(1)) gives om, the special undefined value (represented as "*" in GNU SETL).

setl 'a := {[1,["a"]], [2, ["b"]], [1, ["c"]]}; print(a); print(a(2)); print(a(1)); print(a{1});'
{[1 [a]] [1 [c]] [2 [b]]}
[b]
*
{[a] [c]}

Compound operators (such as +/ and */) make it possible to write quite sparse code (somewhat akin to APL). Here is the factorial of 100, also showing the support for arbitrary precision.

There is no built-in max for tuples. Instead we use the compound operator version (max/), which is possible since max is a binary operator:

$ setl 'setrandom(0); print(max/[random(10000) : i in [1..100]]);'
9898

Another example of compound operators is from Project Euler problem #5. In my solution, lcm and gcd are defined as operators (in contrast to procedures):

print(lcm/[2..20]); -- Prints the answer.

op lcm(a,b);
  g := a gcd b;
  return (a*b) div g;
end op lcm;

op gcd(u, v);
  return if v = 0 then abs u else v gcd u mod v end;
end op;

Speaking of Project Euler problems, here is the SETL program for the first problem:

print(+/{i : i in [1..999] | i mod 3 = 0 or i mod 5 = 0});
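The +/ reduction over a set former maps directly onto Python's sum over a generator (my comparison, not from the article):

```python
# Project Euler #1: sum of the multiples of 3 or 5 below 1000,
# mirroring the SETL +/ reduction.
total = sum(i for i in range(1, 1000) if i % 3 == 0 or i % 5 == 0)
```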

Three different versions of mean are defined (as procedures) using compound operators (maybe not the most efficient way).

-- arithmetic mean
proc mean_A(x); return +/x/#x; end proc;
-- geometric mean
proc mean_G(x); return (*/x)**(1/#x); end proc;
-- harmonic mean
proc mean_H(x); return #x/+/[1/i : i in x]; end proc;
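A rough Python rendering of the three procedures (mine, not from the article), with the reductions written out explicitly:

```python
import math

def mean_a(xs):  # arithmetic mean: +/x / #x
    return sum(xs) / len(xs)

def mean_g(xs):  # geometric mean: (*/x) ** (1/#x)
    return math.prod(xs) ** (1 / len(xs))

def mean_h(xs):  # harmonic mean: #x / +/[1/i : i in x]
    return len(xs) / sum(1 / x for x in xs)
```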

The setrandom(0) call seeds the random generator, starting with an "arbitrary" seed.

setl 'setrandom(0); s := [1,3,5,8]; print([random(s) : i in [1..10]]);'
[5 1 8 8 5 3 3 1 5 3]

With a set we get a value only once:

$ setl 's1 := {1..10}; setrandom(0); print({random(s1) : i in [1..10]});'
{3 5 6 7 8}

In GNU SETL the printed form of the set is always sorted, but this is not a requirement in the SETL language.

GNU SETL has built in support for regular expressions (which standard SETL has not). Some examples:

$ setl 's:="nonabstractedness"; m:=s("a.*b.*c*.d*.e*"); print(s); print(m);'
nonabstractedness
abstractedness

Also see the program that searches for words like this in a word file.

Substitution (cf. gsub for global substitution):

$ setl 's:="nonabstractedness"; m:=sub(s,"a.*b.*c*.d*.e*",""); print(s); print(m);'
non
abstractedness

Note that GNU SETL doesn't support non-greedy regular expressions (i.e. the ".+?" construct from Perl etc.), so the plain old constructs must be used:

$ setl 's:="nonabstractedness"; m:=s("a[^s]+s"); print(s); print(m);'
nonabstractedness
abs

A small drawback is that GNU SETL doesn't have support for national characters in strings. The only acceptable characters are plain ASCII.

SETL also has SNOBOL/SPITBOL-like patterns (but not as nicely integrated as in SNOBOL). Except as experiments, I tend to use regular expressions rather than these functions.

Example: any is used like this:

$ setl 'x := "12345 12345 12345"; print(any(x, "123")); print(x);'
1
2345 12345 12345

However, I miss the many function, which takes characters from the beginning, not just the first, but it is quite easy to create it. First let's see how it works, where we take all the characters from the beginning of the string as long as they are any of "123":

$ setl 'x := "12345 12345 12345"; print(x); while any(x, "123") /= "" loop print(x); end; print(x);'
12345 12345 12345
2345 12345 12345
345 12345 12345
45 12345 12345
45 12345 12345

(The corresponding regular expression for this is, of course .)

A SETL procedure for many is defined below. The first argument is defined as read-write (rw) so we can modify the string s. The value returned (z) contains all the matched characters.

proc many(rw s, p);
  z := "";
  while (zz := any(s, p)) /= "" loop
    z +:= zz;
  end loop;
  return z;
end proc;

And here is many in action. Note: procedures must always be placed last in a program.

x := "12345 12345 12345";
print(x);
z := many(x, "123");
print("x", x);
print("z", z);

proc many(rw s, p);
  print(s);
  print(p);
  z := "";
  while (zz := any(s, p)) /= "" loop
    z +:= zz;
  end loop;
  return z;
end proc;

Result:

12345 12345 12345
12345 12345 12345
123
x 45 12345 12345
z 123
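For comparison, the behaviour of any and many can be sketched in Python. This is my own sketch, not SETL code: Python has no rw parameters, so these helpers return the rest of the string instead of mutating their argument, and the names setl_any/setl_many are mine.

```python
def setl_any(s, p):
    # SETL's any(s, p): if the first character of s is in p, strip and
    # return it; otherwise return "". We return the remaining string too,
    # since Python strings are immutable.
    if s and s[0] in p:
        return s[0], s[1:]
    return "", s

def setl_many(s, p):
    # The many procedure from above: consume characters from the front
    # of s for as long as they are in p.
    z = ""
    while True:
        zz, s = setl_any(s, p)
        if zz == "":
            return z, s
        z += zz

z, rest = setl_many("12345 12345 12345", "123")
# z == "123", rest == "45 12345 12345"
```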

In another example program, many is used, as well as a direct approach and one using regular expressions.

(Shell) filters:

GNU SETL has a lot of extensions for system (UNIX) handling, e.g. the filter function:

$ setl 'f := filter("ls p*.setl"); print(f);s := split(f,"\n");print([s,#s]);'
perm.setl
pointer.setl
primes2.setl
primes3.setl
primes.setl
printprimes.setl
[['perm.setl' 'pointer.setl' 'primes2.setl' 'primes3.setl' 'primes.setl' 'printprimes.setl' ''] 7]

Reading a file directly is done with getfile.

Next, the SEND + MORE = MONEY puzzle. This is rather slow since it has to loop through a lot of values. However, it doesn't loop through all permutations, since for each variable we exclude the values of the previously stated variables.

print(send_more_money1());

proc send_more_money1;
  ss := {0..9};
  smm := [[S,E,N,D,M,O,R,Y] :
          -- ensure that all numbers are different
          S in ss,
          E in ss - {S},
          N in ss - {S,E},
          D in ss - {S,E,N},
          M in ss - {S,E,N,D},
          O in ss - {S,E,N,D,M},
          R in ss - {S,E,N,D,M,O},
          Y in ss - {S,E,N,D,M,O,R} |
          S > 0 and M > 0 and
          (S * 1000 + E * 100 + N * 10 + D) +
          (M * 1000 + O * 100 + R * 10 + E) =
          (M * 10000 + O * 1000 + N * 100 + E * 10 + Y)];
  return smm;
end proc;
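For comparison, here is a brute-force Python sketch of the same puzzle. It is my own version, not a translation of the SETL pruning: permutations already guarantees distinct digits, so only the non-zero and arithmetic constraints remain.

```python
from itertools import permutations

def send_more_money():
    # Assign distinct digits to S,E,N,D,M,O,R,Y with S and M non-zero
    # such that SEND + MORE = MONEY.
    solutions = []
    for S, E, N, D, M, O, R, Y in permutations(range(10), 8):
        if S == 0 or M == 0:
            continue
        send = S * 1000 + E * 100 + N * 10 + D
        more = M * 1000 + O * 100 + R * 10 + E
        money = M * 10000 + O * 1000 + N * 100 + E * 10 + Y
        if send + more == money:
            solutions.append([S, E, N, D, M, O, R, Y])
    return solutions
```

The puzzle has a unique solution: SEND = 9567, MORE = 1085, MONEY = 10652.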

For some other (and slower) variants, see .

A rather fast version of calculating the prime factors of a number. Note that in GNU SETL division (/) returns a real number, whereas in SETL2 it returns an integer, so here we use div instead of /.
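Since the SETL program itself is cut off here, a Python sketch of prime factorization by trial division may help; Python's // plays the role of SETL's div (plain / would likewise produce a real number). This is my own sketch, not the article's version.

```python
def prime_factors(n):
    # Trial division: divide out each factor d while it divides n,
    # using integer division (//) to keep n an integer throughout.
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:          # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(600851475143))  # [71, 839, 1471, 6857]
```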

(read more)

Coronavirus: False and misleading claims about vaccines debunked

Words: - BBC News - 23:16 25-07-2020

In the week that Oxford University announced promising results from its coronavirus vaccine trial, we're looking at claims on social media about vaccines and misleading statements about their safety.

The anti-vaccination movement has gained traction online in recent years, and campaigners opposed to vaccination have moved their focus to making claims relating to the coronavirus.

First, a video containing inaccurate claims about coronavirus vaccine trials, made by osteopath Carrie Madej, that has proved popular on social media.

Carrie Madej's video makes a false claim that the vaccines will change recipients' DNA (which carries genetic information).

"The Covid-19 vaccines are designed to make us into genetically modified organisms."

She also claims - without any evidence - that vaccines will "hook us all up to an artificial intelligence interface".

There are 25 different candidate vaccines in clinical trials around the world according to the World Health Organization (WHO), but none of them will alter human DNA and they do not contain technology to link people up to an artificial intelligence interface.

The vaccines are all designed to provoke an immune response by training our bodies to recognise and fight the virus.

Carrie Madej makes a number of other false claims, including that vaccine trials are "not following any sound scientific protocol to make sure this is safe".

"New vaccines undergo rigorous safety checks before they can be recommended for widespread use," says Michelle Roberts, BBC online health editor.

We have asked Carrie Madej for comment about these claims, but have received no response at the time of publication.

It was first uploaded to YouTube in June, where it clocked more than 300,000 views, but it has also been popular on Facebook and Instagram.

It's still circulating in the United States, the UK and elsewhere.

There was a small protest in South Africa a week after a Covid-19 vaccine trial started in Johannesburg

A scientist in South Africa, Sarah Downs, who writes under the alias Mistress of Science, said she was alerted to the video by her mother whose prayer group had shared it.

The scientist sent her own debunking information to this group and says: "They are now much better informed, which I'm so glad about, because they were all taken in by that video."

When the preliminary results of the Oxford coronavirus vaccine study were published on Monday, the topic provoked much debate in coronavirus-focused Facebook groups.

Some Facebook users posted comments saying they didn't want the vaccine as they felt they would be used as "guinea pigs" and that it had been "rushed into production at warp speed".

While there might be concerns about safety given the accelerated pace of development, Prof Andrew Pollard, head of the Oxford Vaccine Group, told the BBC the rigorous safety processes included in all clinical trials were in place.

This includes safety reports to regulators in the countries taking part.

The trial has been so fast in concluding the first two phases because of the head start provided by previous work on coronavirus vaccines in Oxford, the acceleration of administrative and funding processes, and the huge interest in the trial which meant no time was spent searching for volunteers.

As the trial moves to its third phase, with thousands more volunteers taking part, all the participants will be monitored for side-effects. There were no dangerous side-effects from taking the vaccine in the first two phases, though 16-18% of trial participants given the vaccine reported a fever. Researchers said side-effects could be managed with paracetamol.

When the Oxford vaccine trial first started, there was a claim that the first volunteer had died.

The story was quickly debunked by fact-checkers and the BBC's medical correspondent, Fergus Walsh, conducted an interview with the volunteer.

A meme circulating on social media claims vaccines were responsible for 50 million deaths during the Spanish flu pandemic in 1918.

But this is completely wrong.

Firstly, as the US Centers for Disease Control states, there was no vaccine at the time.

Scientists in Britain and the US did experiment with basic bacterial vaccines, but these were not vaccines as we would recognise them today, says historian and author Mark Honingsbaum.

This was "for the good reason that no-one knew that the influenza was a virus".

There were two main causes of death: the initial flu infection, or the enormous immune response the virus triggered, which led to the lungs filling with fluid.

Additional reporting by Olga Robinson, Shayan Sardarizadeh and Peter Mwai.

Read more from Reality Check

Send us your questions

Follow us on Twitter

(read more)

CurveBoards: Integrating Breadboards into Physical Objects

Words: napkindrawing - lobste.rs - 14:35 31-07-2020


Junyi Zhu, Lotta-Gili Blumberg, Yunyi Zhu, Martin Nisser, Ethan Levi Carlson, Xin Wen, Kevin Shum, Jessica Ayeley Quaye, Stefanie Mueller.

CurveBoards: Integrating Breadboards into Physical Objects to Prototype Function in the Context of Form

In Proceedings of CHI ’20.






CurveBoards: Integrating Breadboards into Physical Objects to Prototype Function in the Context of Form

Figure 1. (a) CurveBoards are 3D breadboards directly integrated into the surface of physical objects. (b) CurveBoards offer both the high circuit fluidity of breadboards, while maintaining the look and feel of prototypes.

CurveBoards are breadboards integrated into physical objects. In contrast to traditional breadboards, CurveBoards better preserve the object’s look and feel while maintaining high circuit fluidity, which enables designers to exchange and reposition components during design iteration.

Since CurveBoards are fully functional, i.e., the screens display content and the buttons take user input, designers can test interactive scenarios and log interaction data on the physical prototype while still being able to make changes to the component layout and circuit design as needed.

We present an interactive editor that enables users to convert 3D models into CurveBoards and discuss our fabrication technique for making CurveBoard prototypes. We also provide a technical evaluation of CurveBoard’s conductivity and durability and summarize informal user feedback.

Introduction

Breadboards are widely used in early-stage circuit prototyping since they enable users to rapidly try out different components and to change the connections between them [23].

While breadboards offer great support for circuit construction, they are difficult to use when circuits have to be tested on a physical prototype. Since breadboards are box-like shapes, they distort the look and feel of the prototype when attached onto it and can interfere with user interaction during testing. In addition, they limit where electronic components can be placed on the prototype since the area for circuit construction is limited to the size of the breadboard.

One workflow to better preserve the look and feel of the prototype is to solder components onto a protoboard or to fabricate a PCB. However, this requires designers to give up circuit fluidity since all components are fixed in place. Trying out different components and changing connections between them can no longer be done without additional soldering. Alternative methods, such as taping the components onto the prototype, offer more flexibility; however, they make it difficult for designers to exchange and rewire parts and do not offer the same circuit building support as breadboards.

In this paper, we present a new electronic prototyping technique called CurveBoard that embeds the structure of a breadboard into the surface of a physical prototype (Figure 1). In contrast to traditional breadboards, CurveBoards better preserve the object’s look and feel while maintaining high circuit fluidity, which enables designers to exchange and reposition components during design iteration.

Since CurveBoards are fully functional, i.e., the screens display content and the buttons take user input, designers can user-test interactive scenarios and log interaction data on the physical prototype while still being able to make changes to the component layout and circuit design as needed.

CurveBoards are not meant to replace existing techniques, such as breadboards or PCBs, but rather provide an additional prototyping technique for early-stage interactive device experimentation. CurveBoards work particularly well during mid-fidelity prototyping, when designers have already tested basic electronic functionality and next want to work on the interaction design, i.e. integrate electronic parts within the context of a prototype form as part of interactive design practice [26].

In summary, we contribute:

a new electronic prototyping technique for early stage interactive device experimentation called CurveBoard

a demonstration of its applicability across different application scenarios and object geometries via five interactive prototypes

an interactive editor for converting 3D models into CurveBoards including different options for the channel layout

a fabrication method for CurveBoards that uses 3D printing for the housing and conductive silicone for channels

a technical evaluation of conductivity & durability

an informal user evaluation with six users who used CurveBoard to build interactive prototypes

an algorithm for automatic pinhole and channel generation given the specific curvature of a 3D model

a discussion of extensions of our approach, including the use of CurveBoard templates and flexible electronics

In the remainder of the paper, we will first review the related work on electronic prototyping tools and then discuss each of the contributions listed above in order.

CURVEBOARDS

The main benefit of CurveBoards is that they allow designers to iterate on the interaction design of a prototype directly in the context of its physical shape. Using CurveBoards, designers can quickly exchange and reposition components on the prototype’s surface. Once rewired, the prototype is fully functional, i.e. screens on a CurveBoard display content and buttons take user input.

By enabling designers to prototype electronic circuits directly on a physical prototype, CurveBoards are particularly suitable for: (1) finding ergonomic and efficient component layouts, (2) ensuring that the components fit onto the physical prototype, (3) preserving a prototype’s look and feel while iterating on a visual design, (4) preserving an object’s intended function while testing a circuit, and (5) identifying component needs based on the prototype’s form factor.

In the next section, we illustrate each of these use cases using the example of prototyping an interactive device.

#1 Finding Efficient and Ergonomic Component Layouts

For interactive devices, the placement of I/O components plays an important role in achieving efficient and ergonomic interaction. The design of wearables is particularly challenging since I/O components have to be placed with respect to the user’s body and the way the wearable device is worn.

Figure 2 shows this using the example of headphones with built-in speakers and music streaming capabilities, for which we explore the placement and type of menu controls to optimize for the user’s arm reach and ability to distinguish controls.

Our initial version of the prototype had the volume controls on the side of the user’s dominant hand and the playlist controls on the non-dominant hand. After analyzing the logged interaction data recorded on the micro-controller over the course of a day, we realize that the playlist controls are being used more often than the volume controls. We therefore move the playlist controls to the side of the dominant hand.

In addition, users informed us that it was difficult to distinguish between the volume and channel controls since they both used press input with similar button sizes. To avoid wrong user input, we replaced the volume buttons with a dial.

Figure 2. Finding efficient and ergonomic I/O layouts for a pair of headphones.

#2 Ensuring that Components Fit onto the Prototype

When prototyping on a traditional breadboard, it is difficult for designers to estimate if all components will later fit onto the physical prototype. A physical design, especially when small or filigree, can limit which types of interactive components can be used and where they can be placed, which is an important part of interaction design.

Figure 3 shows this using the example of an interactive bracelet with display and button for menu control, a photoresistor and LED for heart rate monitoring, and an IMU for step counting. While prototyping directly on the bracelet, we notice that the large display we initially selected does not fit on the slender bridges of the bracelet; we thus exchange it for two smaller ones. After testing different button sizes on the bracelet prototype, ranging from 2.3mm to 16mm, we find that the 7mm button fits best while providing the largest interactive area among the options. We next add the LED and photoresistor, making sure they can be positioned together and offer enough space for users to place a finger. Finally, we exchange the wider IMU for a slimmer one, and replace the larger microcontrollers with two smaller ones.

While some components, like micro-controllers, can be made more compact in later design iterations via custom PCBs, user-facing I/O components (buttons, displays) will remain similar in size since they relate to users’ physical characteristics, such as finger size and what the human eye can see.

Figure 3. Based on the available space on the prototype, we iterate on the design, exchanging the large OLED display for two smaller ones and adding the selected push button and DIP LED.

#3 Preserving “Look” and “Feel”

When prototyping visual designs, such as a piece of interactive jewelry, it is difficult for designers to get a sense of the overall look and feel when using a breadboard that significantly distorts the prototype’s shape. CurveBoards, in contrast, allow designers to integrate the components directly on the prototype’s surface, which better preserves the shape.

Figure 4. The look and feel of this interactive ring is preserved since the pinholes are directly integrated into its geometry.

Figure 4 shows this using the example of an interactive ring for which we iterate on the placement of LEDs. Since in our CurveBoard the pinholes form a part of the object geometry, no additional space is needed for them. We try different LED arrangements and, based on how the ring looks on our hand, we decide to use multiple LEDs in a row rather than side by side. Note that while CurveBoard better preserves the shape, the pinholes are still visible and impact the overall aesthetics.

#4 Preserving an Object’s Intended Function

Traditional breadboards add volume to an interactive object, which can hinder its intended function. For instance, a frisbee may not fly anymore since its shape is no longer aerodynamic, a ring may no longer fit on a user’s hand, and a teapot may not be able to hold the same amount of liquid inside.

Figure 5 shows this in more detail with the example of an interactive frisbee that displays a light pattern when thrown. Prototyping this frisbee with a breadboard attached to its surface would make the frisbee dysfunctional, i.e. it would not fly anymore. CurveBoard, in contrast, preserves the shape of the frisbee and thus its function. Our frisbee design initially contained a microcontroller and an IMU for sensing when the frisbee is in motion. We then iterated on different LED layouts until we settled on the one shown in Figure 5b.

Figure 5. CurveBoard keeps this frisbee functional, which allows us to test the appearance of different light patterns: (a) circular design, and (b) POV “CHI” design when thrown.

#5 Identifying Component Needs

Brainstorming interactive functionality without considering the physical form can be challenging for a designer, who is left with an abstract circuit on a breadboard. The direct, hands-on interaction with a prototype, in contrast, supports exploration, which enhances creativity and can help designers to envision interactive workflows and to identify the components needed to realize them [28]. In addition, since certain object geometries can affect component placement (e.g., high curvature may prevent access to movable elements like knobs or sliders [12]), brainstorming in the context of shape also allows designers to take such challenges into account.

Figure 6 shows this via the Utah teapot example, for which we want to visualize the temperature of the liquid inside. After trying different display and temperature components on the teapot, we realize that a temperature indicator is also needed on the handle to show whether it is safe to hold. Since the handle is too small for another OLED display, we instead add an RGB LED that displays either red or blue to indicate hot or cold. Later on, we decide to add a camera module on the inside of the lid to detect the tea concentration, so that we know exactly when the brewed tea reaches the best saturation for our preferences.

Figure 6. Here we brainstorm interactive functionality for the Utah teapot: (a) camera and display to monitor tea shade, and (b) LED and display to indicate temperature.

INTERACTIVE EDITOR FOR CREATING CURVEBOARDS

To support designers in creating CurveBoards from their digital prototype designs, we developed an interactive editor. After loading the 3D model of the prototype, our editor first generates the pinholes across the model’s surface. Next, designers connect the pinholes into a desired channel layout using either the automatic or manual layout tools. Once the layout is completed, our editor automatically generates the channel geometry and provides the fabrication files.

Figure 7. (a) CurveBoards interactive editor UI, (b) Example of a generated pinhole pattern.

#1 Converting the 3D Model into a Set of Pinholes

Designers start by loading a 3D model into the CurveBoard editor. Next, designers click the ‘generate pinholes’ button, which creates the characteristic pinhole pattern across the surface of the board (Figure 7).

#2 Creating the Board Layout

Next, designers create the board layout, i.e. define how to connect the pinholes into power and terminal lines. Like 2D breadboards, whose fixed row-and-column layout cannot be modified dynamically, CurveBoards are subject to this limitation once the board is fabricated. However, besides the standard breadboard layout that the CurveBoard editor can generate automatically, designers can also modify the layout manually depending on their prototyping needs, or, for maximum flexibility, leave all pinholes on the board disconnected, effectively creating a Curve-Protoboard.

Automatic Channel Layout: This method automatically generates a default breadboard layout across the surface of the 3D model. It requires minimal effort from the designer but also pre-defines how components can be placed (Figure 8a). Users can explore different versions of the automatic layout, with different channel orientations on the object geometry, by pressing the “Generate Board Layout” button multiple times.

Figure 8. (a) Automatic layout, (b) manual layout.

Manual Channel Layout: Alternatively, designers can customize the automatic layout or create a new manual layout from scratch using the interactive tools for terminal and power line creation. Designers only have to select a set of pinholes and then indicate the desired type of connection using the corresponding button. This provides designers with more freedom in how to route the channels but comes at the expense of additional manual effort (Figure 8b).

No Channel Connections (Curve-ProtoBoard): Finally, designers also have the choice to leave all pinholes disconnected, effectively creating a Curve-Protoboard. The holes of a Curve-Protoboard are bigger in diameter (1mm vs. ~0.3mm for CurveBoards) and fit both a component’s header pin and a wire. While this provides maximum flexibility, it requires additional wiring while prototyping.

#3 Export & Fabrication

Once designers hit the ‘export’ button, the CurveBoard editor generates the final CurveBoard geometry containing all the pinholes and connection channels underneath the CurveBoard’s surface (Figure 9). The CurveBoard editor then exports the geometry as an .stl file for 3D printing.

Figure 9. Generated 3D printable file: (a) in render mode, and (b) in transparent ‘ghost’ mode.

FABRICATION METHOD OF CURVEBOARDS

In the next section, we describe the fabrication technique used for CurveBoards and provide details on the material preparation and mixing procedure.

Dual Material 3D Printing with Conductive Rubber

Our initial goal was to use dual-material 3D printing to fabricate CurveBoards, with one rigid non-conductive material used for the housing and a deformable conductive material (e.g., conductive rubber) used for the channels to facilitate repeated plugging of components.

Since dual-material 3D printing with conductive rubber is still an experimental fabrication technique, we used the external printing services from ACEO, which is the only company we found to offer this type of fabrication (materials: ACEO Silicone GP White for the housing, and ACEO Silicone EC for the conductive rubber, Shore Hardness 30 A).

We developed a range of 3D printed object geometries to test with their printing service, including different channel widths ranging from 0.6-1.5mm. However, we found that even the best of our prototypes still had a resistance of 1.4 kΩ per channel (6-hole length). Thus, while the resistance was good enough to light up an LED (see Figure 10 for the 3D printed result), it was not conductive enough to work with other standard electronic I/O components. In addition, the maximum volume currently supported by the 3D printer is 200 cm3, with a resolution of ~1 mm needed to achieve reliable prints. We therefore conclude that dual-material 3D printing is not yet suitable for fabricating CurveBoards.

Figure 10. (a) Dual-material silicone 3D printed CurveBoard, and (b) with SMD LED connected.

3D Print Housing + Fill with Conductive Silicone

To address this issue, we developed a fabrication technique based on 3D printing the housing and filling the channels with conductive silicone (Figure 11). It enables high conductivity at the expense of additional manual effort (filling the channels of a CurveBoard takes 15-30 min, based on our experience building the prototypes for this paper).

Figure 11. (a) 3D printing the CurveBoard housing. (b) Filling the hollow channels with conductive silicone.

We next provide more details on both the 3D printing process and the task of filling the channels with conductive silicone.

3D Printing: For 3D printing, we used FDM 3D printers and printed on both the Ultimaker 3 and Prusa i3 MK2. We tried layer heights from 0.1mm-0.2mm and found that all created CurveBoard housings with channels that were suitable for filling them with conductive silicone. The prototypes in this paper were printed with a layer height of 0.15mm.

Conductive Silicone for Channels: To find a material mix that is both conductive and easy to extrude, we tested a range of different carbon fiber lengths, carbon-silicone ratios, and needle sizes.

Carbon fiber length: The longer the carbon fibers, the more conductive the mixture, but the harder it is to extrude. We tried fibers ranging from 0.5mm to 2.5mm.

Carbon-Silicone ratios: A higher carbon amount increases conductivity but makes the mixture harder to extrude. We tried ratios ranging from 3% to 10% carbon.

Needle size: The larger the needle size, the easier it is to extrude the mixture but the more difficult it is to insert the needle into a pinhole. We tried needle sizes ranging from 20G (0.6mm) to 14G (1.55mm).

Below, we describe the mixing procedure that we empirically determined to work best for filling CurveBoards.

To create the conductive silicone, we first mixed 10g of chopped carbon fiber (0.7mm long, 7 μm diameter, from Procotex [4]) with 3ml of isopropyl alcohol. After stirring and dispersing the fiber hairs, we mix them into 100g of part A of a regular two-component silicone (type: Smooth-On SORTA-Clear 37 Clear [34]) and stir for 5 minutes (Figure 12a/b). The carbon-to-silicone ratio is 5wt% once the same amount of part B is added. Without part B, the conductive silicone does not start curing, i.e. we can keep the mix of part A + carbon fiber on the shelf and use it over several days if stored covered.

Once we are ready to use the silicone, we add 100g of part B (Smooth-On SORTA-Clear 37 Clear) to the mix (Figure 12c), which initiates curing. After stirring for 5 minutes, we transfer the conductive silicone to a syringe (3ml, with 16-gauge blunt tip tapered dispensing needle) (Figure 12d).

The syringe can then be used to fill the channels in the CurveBoard. Because silicone is sticky, it remains inside the channels and does not drip out even when the object is tilted. Once all channels are filled, the CurveBoard cures for 75 minutes. For CurveBoard designs with pinholes on both sides (e.g. teapot lid in Figure 6), we tape one side up to prevent leaking, and fill the conductive silicone from the other side. We clean up the residue from the surface afterwards.

Because of the light blue tint of our fast-curing silicone (Smooth-On OOMOO 25, curing time: 75 mins), we used a clear, slower-curing silicone for the pictures in this paper (Smooth-On SORTA-Clear 37 Clear, curing time: 4 hours).

Average Amount of Material per Prototype: We found that CurveBoards with a volume of less than 400cm3 required on average 3g carbon and 60g silicone to fill all channels.
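As a quick sanity check of the quantities stated above, both the mixing recipe and the average per-prototype amounts work out to the same carbon-to-silicone ratio (this arithmetic is mine, derived from the paper's numbers):

```python
# Mixing recipe: 10 g chopped carbon fiber into 100 g part A + 100 g part B.
recipe_ratio = 10.0 / (100.0 + 100.0)

# Average per-prototype material use: 3 g carbon, 60 g silicone.
prototype_ratio = 3.0 / 60.0

# Both match the stated 5 wt% carbon-to-silicone ratio.
assert abs(recipe_ratio - 0.05) < 1e-12
assert abs(prototype_ratio - 0.05) < 1e-12
```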

Figure 12. Mixing the silicone.

Surface Coloring to Visualize Channel Layouts

Since we fabricated the CurveBoards in this paper using standard FDM 3D printers that do not allow for high-resolution color printing, we cannot fabricate the breadboard markers for power and terminal channel connectivity. Instead, we mark the channel layout on the CurveBoard manually using a color marker as can be seen in Figure 13b.

Figure 13. Channel Connectivity: (a) rendering in CurveBoard editor, (b) standard 3D printing with marker drawing, and (c) full-color 3D printing with Da Vinci printer.

With recent advances in full-color 3D printing, we can also use a full-color printer, such as the Da Vinci, to mark the connectivity (Figure 13c shows an example). However, since full-color printers are significantly slower than regular FDM 3D printers due to the additional pass required to apply the color ink, we decided not to use them for our prototypes.

EVALUATION OF CONDUCTIVITY & DURABILITY

To measure the conductivity and durability of our fabrication method, we ran two experiments.

Conductivity: To measure the resistance across different channel lengths, we fabricated a 16-hole long channel and measured conductivity by iteratively increasing the pinhole distance between the measurement points. Figure 14 shows the result. When a line is fitted, we see that each extra pinhole adds on average 8.5 ohms of resistance.

From all the channels contained in our prototypes, on average >95% are short terminal channels (3-6 holes, 30-60 ohms). This is comparable to standard breadboards, which usually have >93% terminal channels. Power channels can get long and can have higher resistance (bracelet: 16 holes, 120 ohms; headphone: 36 holes, 600 ohms); the large variance of resistance in long channels is likely due to the printing quality of different CurveBoard geometries.

Figure 14. (a) Resistance for different channel lengths, (b) Resistance over 100 plugs.
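The fitted line mentioned above is an ordinary least-squares fit of resistance against channel length. A minimal sketch, using illustrative stand-in measurements (not the paper's data) that grow by ~8.5 ohms per extra pinhole:

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Illustrative stand-in data: resistance for channels of 2..10 pinholes.
holes = [2, 4, 6, 8, 10]
ohms = [17.0, 34.0, 51.0, 68.0, 85.0]
slope, intercept = fit_line(holes, ohms)
# slope is the per-pinhole resistance increase (8.5 ohms here).
```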

We wired up a range of components and found that digital signals including PWM worked well, most analog signals (e.g., photoresistors) also worked provided the channels had a stable resistance. Even for the longer power channels, resistance was within the fault-tolerance of typical electronics.

If needed, the resistance can be further reduced using one of the following techniques: Since the resistance R is a function of the distance D and the cross-section area A of the channel (R = k*D / A; k being a material property), we can decrease the resistance by making channels wider. However, this restricts the density of channels on the breadboard and prevents the use of standard components. Alternatively, we can also make the channels deeper, but this restricts the types of objects we can create as the prototypes become thicker. Finally, to reduce resistance, designers can also split long channels into segments and use connecting wires between them.
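The R = k·D/A relationship above also makes the trade-offs easy to quantify: doubling either the width or the depth of a channel halves its resistance. A small sketch (the material constant k is a made-up placeholder, chosen only so a short channel lands near the measured 30-60 ohm range):

```python
def channel_resistance(k, length_mm, width_mm, depth_mm):
    # R = k * D / A, with cross-section A = width * depth.
    return k * length_mm / (width_mm * depth_mm)

k = 3.0  # ohm * mm; hypothetical placeholder, not a measured value

base   = channel_resistance(k, 15.0, 1.0, 1.0)   # 45 ohms
wider  = channel_resistance(k, 15.0, 2.0, 1.0)   # doubling the width
deeper = channel_resistance(k, 15.0, 1.0, 2.0)   # doubling the depth
# Both halve the resistance; splitting a long channel into segments
# instead halves D per segment with the same effect.
```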

Durability: To evaluate the durability of the silicone against repeated plugging and unplugging of components, we selected a CurveBoard channel of three-pinhole length and repeatedly inserted a wire into it. We measured the resistance of the channel after every 10 plugs. We found that the silicone has a high durability due to its self-closing characteristics when a wire is removed. Surprisingly, over repeated use the resistance also decreased (Figure 14). We hypothesize that poking the pinhole over and over packs the carbon fibers tighter, which creates a better connection with the wire.

Decay Over Time: We repeated the above experiments with conductive silicone channel bars that were 6 months old. We found no statistically significant difference in the resistance measurements (p = 0.578).

EVALUATION WITH USERS

To gather feedback on designers' experience when using CurveBoard, we ran a user study similar to the one conducted by Drew et al. for evaluating ToastBoard [8]. Note that our user study only investigates the experience of prototyping an interaction design using CurveBoard; further studies are needed to gather feedback on the CurveBoard channel-layout design process and the CurveBoard fabrication.

We recruited six participants (3 female, 3 male). Experience levels for electronic prototyping ranged from moderately experienced (one participant) to very experienced (five participants). After a brief introduction, we randomly assigned participants to one of two conditions (either traditional breadboards + a separate 3D printed prototype, or a CurveBoard of the same form factor, as shown in Figure 15a/b). Participants completed a task sequence and then switched to the other condition.

Figure 15. Materials provided to participants in the (a) traditional breadboard vs (b) CurveBoard condition.

The task sequence asked participants to build an electronic circuit on the physical prototype given a schematic, a description of intended functionality, and a micro-controller that already runs working code. The participants were told which electronic components to use but not where to place them, i.e. they had freedom in determining the spatial layout of their interaction design. They were asked to optimize the layout of the I/O components for ergonomic and efficient interaction. We used two different task sequences and randomly assigned an order (both used the same form factor, i.e., the 3D model of the bracelet, but different schematics).

After each completed task, we asked participants to show us their prototype and then gave them the next task. Participants had 60 minutes per condition for finishing as many tasks as possible. The tasks were the same for all participants. Participants were not limited in terms of materials, i.e. they had several breadboard sizes including small 4x4 breadboards.

At the end of the session, participants filled out a questionnaire about what features they found helpful or frustrating, how both prototyping methods compared to each other, and what types of features they might like to see.

Findings

Five of the six participants stated a preference for CurveBoard over the 2D breadboard. Below, we summarize additional observations and qualitative feedback:

Attaching the Breadboard to the Prototype: When using a 2D breadboard, participants used a wide variety of methods to attach the breadboard to the prototype. Most participants peeled off the backside of the breadboard to expose the sticky surface and then attached it to the prototype. One participant first applied tape across the entire bracelet and then placed the breadboard on it. Once placed, however, the breadboards were hard to move around in case additional boards had to be added to provide more wiring space for components. P2 stated: ‘I spent a lot of time with the 2D breadboards figuring out the best way to situate and attach them to the bracelet, which I didn’t have to do in the 3D breadboard use.’

Figure 16. Some Prototypes built during user study in (a) traditional breadboard and (b) CurveBoard condition.

Pre-defined Breadboard Size & Component Density: When using 2D breadboards, participants found it challenging to place electronic components close to each other. Participants stated: ‘The 2D breadboard size made it very hard when I was trying to get the OLEDs near to each other.’ (p1) ‘I tried to use as many smaller breadboards as I could to cut down on excess space.’ (p6) ‘I rotated the breadboards to have two breadboards next to each other so that 4 screens are near each other’ (p1). In contrast, when asked about CurveBoard, participants said: ‘Having space to put components all over the object was pretty useful.’ (p4) ‘The 3D Breadboard had all the wire holes already there so I didn’t need to think about where to physically place breadboards.’ (p1) ‘It was easier to move things and redesign the layout.’ (p2)

Requested Improvements: Participants' requests included: (1) firmer connections when plugging in wires (p2, p4), (2) the ability to customize the channel connections through the interactive editor or a CurveProtoboard (p3, p4), and (3) curved and flexible electronic components to better approximate the shape (p1, p6).

IMPLEMENTATION

Our conversion software [46] is implemented as a Grasshopper plugin to the 3D editor Rhino3D.

Generating the Pinholes on the 3D Model Surface

To generate the pinholes on a 3D model surface, we first convert the 3D model into a quadmesh (Figure 17c). Each vertex of the mesh represents one pinhole candidate on the breadboard’s surface. To ensure a consistent spacing of pinholes that is required to make DIP electronic components fit, we enforce a fixed spacing between adjacent quad vertices (i.e., 2.54mm as on a 2D breadboard).

Figure 17. (a) orientation field, (b) position field, and (c) generated quadmesh

To convert the model, we use the instant meshes open-source library by Jakob et al. [14], available on GitHub [11]. To provide the 3D model to instant meshes, we first export it as an .obj file using Rhino3D's export function. Next, we prepare the input arguments for instant meshes: since we want to generate a quadmesh, we set the orientation symmetry (Figure 17a) and the position symmetry (Figure 17b) to 4. Next, we set the crease angle threshold to 90° to enforce quad alignment with sharp folds in the surface. We also set the align-to-boundaries parameter to true to align the edges of individual quadmeshes with the edges of the 3D model. The final input parameter, target face count, needs to be calculated from the mesh geometry. For this, we take the surface area of the 3D model and divide it by the size of the desired quad, i.e., 2.54 x 2.54 mm (= 6.4516 mm²), which is the standard area used by one breadboard pinhole.
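The target face count computation can be sketched as follows (the function name is ours; the 2.54 mm pitch and the division by the quad area are as described above):

```python
PINHOLE_PITCH_MM = 2.54                  # standard breadboard pin spacing
QUAD_AREA_MM2 = PINHOLE_PITCH_MM ** 2    # 6.4516 mm^2 per pinhole cell

def target_face_count(surface_area_mm2):
    """Number of quads to request from instant meshes so that each
    quad covers roughly one 2.54 x 2.54 mm pinhole cell."""
    return round(surface_area_mm2 / QUAD_AREA_MM2)

# e.g. a model with 10,000 mm^2 of surface area:
print(target_face_count(10_000))   # -> 1550
```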

Next, we pass all pre-determined parameters and the 3D model to instant meshes via a Python subprocess call. After instant meshes has calculated the quadmesh, it outputs a list of the quads' vertices (each vertex will form a pinhole), a list defining the connectivity between these vertices (representing the different CurveBoard layout-connectivity options), and the converted 3D model.

Our CurveBoard editor in Rhino3D then reads the quad mesh results back, removes the old model from the viewport, and instead displays the new quadmeshed 3D model.

Specifying the Breadboard Connectivity

Each vertex of the quadmesh represents a pinhole. When users select vertices using the VCC, GND, and terminal brushes, the vertices are added to a point list (polyline) that represents the connectivity of the breadboard channels.

Generating the Geometry of Channels for fabrication

In a final step, we generate the geometry for 3D printing.

Pinholes: To create pinholes, we create a cone 4mm deep from the surface of the object at each pinhole (vertex), narrowing from 1 mm diameter at the surface to 0.8 mm at their tips to minimize channel overlap over concave surfaces. We then perform a Boolean difference operation to subtract the cone geometry from the 3D object, creating tapered pinholes.

Channels: To create the channels, we first offset the points of each pinhole on the channel inward along a vector orthogonal to the surface, which creates a polyline along which the channel will be created. At each offset point, we generate a rectangle centered on the point, with its height axis parallel to the surface normal and its width axis orthogonal to both the surface normal and the polyline. The height and width of each rectangle, which we later use for lofting the channel, are initially set to the default channel size (width: 1.8mm, height: 4mm). Next, we perform two tests to adjust the channel size depending on the object geometry:

Determine if the prototype is thick enough for the channel: First, we determine if the height of the channel is smaller than the thickness of the prototype at each point of the channel (+ 1mm buffer on top/bottom for the wall thickness for 3D printing). If the thickness of the object geometry is not sufficient, we calculate the smallest height along the polyline and adjust the rectangle height accordingly (Figure 18a).

Determine if channels collide because of the curvature of the object: Next, we determine if the width of the channel causes collisions with other channels. To check this, we take the channel rectangles of each channel and test if they intersect with each other. If this is the case, we reduce the width of the channel (Figure 18b). In addition, we reduce the risk of collision by rounding all corners of the rectangle (fixed corner roundness of 0.9mm).
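A minimal sketch of the rectangle construction described above, using plain vector math (the function names and example inputs are ours, not the actual Grasshopper implementation):

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(v):
    length = (v[0]**2 + v[1]**2 + v[2]**2) ** 0.5
    return (v[0]/length, v[1]/length, v[2]/length)

def channel_rectangle(p, p_next, surface_normal, width=1.8, height=4.0):
    """Four corners of the channel cross-section rectangle centered on
    offset point p: the height axis is parallel to the surface normal,
    the width axis orthogonal to both the normal and the polyline."""
    tangent = norm(sub(p_next, p))           # local polyline direction
    h_axis = norm(surface_normal)            # height axis
    w_axis = norm(cross(tangent, h_axis))    # width axis
    hw, hh = width / 2, height / 2
    return [(p[0] + sw*w_axis[0]*hw + sh*h_axis[0]*hh,
             p[1] + sw*w_axis[1]*hw + sh*h_axis[1]*hh,
             p[2] + sw*w_axis[2]*hw + sh*h_axis[2]*hh)
            for sw, sh in ((-1, -1), (1, -1), (1, 1), (-1, 1))]
```

Shrinking `width` or `height` before building the corners corresponds to the two adjustment tests above; lofting consecutive rectangles then yields the channel solid.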

As our minimum channel size is 0.5mm high and 0.6mm wide, we support a maximum curvature of ~92.59 m⁻¹ at any pinhole position of the 3D model without channel collision.

Once we have determined the correct dimensions based on these criteria, we loft the rectangles and cap them into a polysurface for the channel. Finally, we perform a second Boolean difference with the model geometry to create the channels that connect the pinholes. The resulting geometry is saved as a 3D-printable .stl file.

Figure 18. Channel size: (a) adjusting the height to smaller than the thickness of the prototype, (b) adjusting the width to prevent collision with neighboring channels.

LIMITATIONS

CurveBoards are subject to several limitations:

Number of Pins and Curvature of Surface: When electronic components are plugged into CurveBoards, pins may not make contact with the surface when it is strongly curved and the component has many pins (e.g., displays). However, an analysis of standard electronic components showed that most components have fairly short pin rows (73/88 sensors on Sparkfun have < 7 pins in a single row), and that pins are tightly packed (2.54mm), long (6mm), and can be slightly bent. In addition, with the silicone pads on CurveBoards, the pins do not have to be fully inserted. Therefore, components with < 7 pins work on curvatures < 66.22 m⁻¹. We can also use IC sockets, which have longer pins and thus adapt better to steep curvature. The largest component on our prototypes is a micro-controller with 38 pins.

Thickness of Required Prototypes: Since our method embeds the breadboard within the physical prototype, the prototype needs to have a certain thickness for our algorithm to work. The minimum thickness is 3mm throughout, with a channel thickness of 0.6mm. As mentioned previously, the channel thickness directly correlates with the resistance.

Regularity of Pin Spacing: While it is easy to create regularly spaced pinholes on a rectangular 2D breadboard, this is more challenging on arbitrary 3D surfaces. While our algorithm minimizes these issues, several pins on a board's surface are not regularly spaced (due to deformed quads resulting from the quad meshing). We leave those areas empty to prevent invalid pin spacing on the surface.

DISCUSSION

Next, we reflect more broadly on the idea of integrating breadboards into physical prototypes.

Reusability: CurveBoard Templates

Traditional breadboards have a generic form factor, which allows them to be reused across different circuit designs. By designing CurveBoard prototypes as generic templates that represent classes of interactive devices, we can reuse CurveBoards across more than one use case. For instance, a CurveBoard in the shape of a plain bracelet can be used to prototype many different interactive wrist-wearables, such as a smart watch or decorative bracelet (Figure 19).

Figure 19. Generic wearable designs with same CurveBoard template: (a) smart watch, (b) decorative bracelet.

CurveBoard templates thus allow designers to defer the shape decision for an individual CurveBoard prototype. Designers can start exploring the interaction design on a template, then refine the shape based on initial insights and create a custom CurveBoard afterwards. As with any template, this comes at the expense of less customization for individual prototypes.

Flexible Electronics

While CurveBoards work with flat rigid electronic components, rigid components do not integrate well with the shapes of physical prototypes. Flexible electronics, in contrast, conform better to the shape as they can bend. For instance, a flexible display mounted on a CurveBoard bracelet conforms better to the curvature than a rigid display (Figure 20). While flexible electronics are to date still rare and an area of ongoing development, we envision that future CurveBoards will primarily be used with flexible components of adaptable curvature.

Figure 20. A wearable E-book reader with flexible display.

CONCLUSION

We presented CurveBoards, 3D breadboards directly integrated into the surface of physical prototypes. We demonstrated how CurveBoards provide a new electronic prototyping technique that allows designers to prototype the interaction design directly in the context of a physical prototype. We demonstrated CurveBoards' applicability across a range of application scenarios and object geometries, described our interactive editor, and detailed our fabrication method, which uses 3D printing for the housing and conductive silicone for the channels. We evaluated the conductivity and durability of the resulting CurveBoards and reported informal user feedback from participants who used CurveBoard for electronic prototyping. Finally, we detailed our implementation and concluded with a discussion of limitations and opportunities around reusability and the use of flexible electronics. For future work, we plan to extend CurveBoard to support differently sized or spaced parts and to explore potential avenues for custom-shape electronics that better fit CurveBoard geometries.

ACKNOWLEDGMENTS

We would like to thank our colleagues from the MIT International Design Center for the hardware support, especially Chris Haynes. We would also like to thank Dishita Turakhia for her help with the figures of this paper!

(read more)

Working from home: How to cut your tax bill

Words: - BBC News - 00:44 25-07-2020

If you're working at home, you can claim for the extra electricity you use

If you've been working from home over the past few months, you may well save on travel costs and lunches, but you'll run up other bills, including heating and electricity.

Some employers give their staff an allowance to cover those excess costs.

But if they don't, employees can still claim a small reduction in their taxes for the time they were forced to work at home.

It would be very tricky to calculate exactly how much of your electricity bill is used for work - so HM Revenue and Customs (HMRC) lets you claim up to £6 a week of expenses without having to provide bills or paperwork to justify it.

That doesn't mean you save £6 a week - you only save the tax you would have paid on it. That works out as £1.20 a week (around £62 a year) for a basic-rate taxpayer, or £2.40 a week (around £125 a year) for a higher-rate taxpayer.
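The arithmetic works out as follows (a quick sketch; the £6 flat rate and the tax rates are those quoted above, while the function name is ours):

```python
def weekly_tax_saving(flat_rate=6.00, tax_rate=0.20):
    """You save only the tax you would have paid on the flat-rate
    amount, not the flat-rate amount itself."""
    return flat_rate * tax_rate

basic = weekly_tax_saving(tax_rate=0.20)    # basic-rate taxpayer
higher = weekly_tax_saving(tax_rate=0.40)   # higher-rate taxpayer
print(f"basic:  £{basic:.2f}/week, £{basic * 52:.2f}/year")
print(f"higher: £{higher:.2f}/week, £{higher * 52:.2f}/year")
```

For a basic-rate taxpayer this gives £1.20 a week, roughly £62 over a full year; at the higher rate, £2.40 a week.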

And you must be working at home because you have to, not out of choice.

It is possible to claim more than £6 a week, but you have to provide paperwork to support your claim.

Luckily you don't have to calculate exactly how much you have spent working at home

If your employer provides a working from home allowance, that's tax free up to £6 a week - so you can't make another claim for tax relief on top.

These allowances have been available for years, but awareness has grown since the pandemic forced many offices to close.

"We have had quite a few calls about this," says Yvonne Graham, tax manager at Ensors accountants in Ipswich. "It's not going to make people rich, but it is a useful amount."

Before 6 April 2020, you were allowed to claim only £4 a week without providing evidence. It's still possible to claim back as far as April 2016, but for prior years you can only claim the lower rate.

Emma Peck, who works in financial services in Buckingham, has been claiming the full allowance since she started working from home in 2017.

Emma Peck has been claiming expenses for working at home since 2017

She has no office to go to, so she is obliged to work from home and therefore entitled to claim.

Previously, she was self-employed, and was used to claiming for expenses to reduce her tax bill, as many self-employed people do.

So every year when she gets her "notice of coding" from HMRC, which says how much she can earn before paying tax, it includes an allowance for "Job Expenses" - the cost of working at home.

This year, it's £312 - in other words, 52 x £6 a week.

Emma has encouraged many of her home-working colleagues to claim too. "The thing with people who are PAYE, they are used to having everything done for them. They are scared to go onto the HMRC website."

There are two ways to claim expenses - either on your annual tax return, if you file one, or on a special form called a P87, which is available electronically via Government Gateway, or on paper.

If you're working at home indefinitely, you can get your tax code changed like Emma to get the savings regularly.

If you're only working from home for a short period of time, it makes sense to wait until you're back at work, so you know how much to claim for.

HMRC said in a statement: "Employees can claim the P87 expenses at any time but claiming when they return to their place of work means their claim will be for the right amount and they will only have to contact us once."

Home workers can claim the extra cost of heating

If you've had to buy, say, a computer or an office chair to be able to work from home, your employer might pay you back.

If not, you can claim tax relief on what you have bought, as long as it is used "wholly, exclusively and necessarily" for work.

You will need to keep records, and claim the exact amount.

You can also claim for work telephone calls on top of the £6 flat rate - again, you will need to keep records and claim the exact amount.

But you can't claim for home-related costs that don't increase because you're now working there. That includes council tax payments, rent, mortgage interest, or water - unless you have a water meter.

Jonathan Griffin from South Yorkshire works in IT. He has been working from home since 23 March.

He filed a claim using a P87 form for the tax on seven weeks' worth of expenses from 6 April.

That would come to £6 x 7 = £42. As a basic-rate taxpayer he will get 20% of that - about £8.40 - back.

He filed the claim online with HMRC on 23 May, which he found "quite straightforward". There is a tracker on the HMRC website, which initially said it would be processed on 15 June.

But at the time of writing the payment still hadn't arrived and the tracker says it is still being processed.

(read more)

Split Mechanical Keyboard Build Log

Words: bcongdon - lobste.rs - 13:57 31-07-2020

July 30, 2020   •   8 minutes

tl;dr: I built a split mechanical keyboard. It looks like this:

At the beginning of the COVID-19 lockdown, I noticed that I was feeling wrist strain while working at home. I've been using a vertical mouse for several years, but up until recently I was using a conventional keyboard. I'd seen anecdotes that split keyboard designs can be more ergonomic, and I'd experienced that myself when I used Microsoft's Sculpt Keyboard during an internship. I eventually bought a Kinesis Freestyle Pro, and in the process of researching that keyboard I became more interested in split mechanical keyboards.

I've been using my Kinesis keyboard for the past few months, and have found the split layout to be very comfortable. It allows me to have my wrists and arms in a more neutral position, which reduces strain during long sessions of use. Importantly, the split layout doesn't affect my typing speed – it's essentially just a normal QWERTY keyboard split along the middle. Other split layouts (like the Ergodox) require a longer re-learning period.

Though I really like my Kinesis keyboard, it's not a particularly aesthetically pleasing object. During my research, I stumbled upon a community sourced list of split keyboards by a user of /r/mechanicalkeyboards. This inspired me to want to build my own split keyboard.

It was around this time that I found Keebio's online store, which sells PCB and case kits for split keyboards. Many of the split keyboard PCB sellers I'd seen online focused on minimalist layouts (<=60% layouts), which I didn't want. Keebio had 2 models that interested me: the Quefrency (pronounced “key-frency”) and the newly-released Sinc, both of which were full sized:

The Sinc board had a full set of function and macro keys, so I went with that model. Keebio's boards tend to go out of stock quickly, so I was lucky to get one from the first production run of PCBs…

The Sinc PCB I ordered came with a microcontroller and surface-mount diodes already soldered to the board, which made assembly quite easy. From what I saw online, it's fairly common to get a “naked” PCB, and you have to bring your own microcontroller (usually something like an Arduino or ESP32) and solder on a diode for each key switch.

I was pretty impressed with Keebio's PCBs – the Sinc even came with underglow LEDs and a reset button on each half of the board.

The Sinc PCB design supports a number of layout variants. I used the excellent open source keyboard-layout-editor.com to plan my layout. Though, there weren't a ton of decisions to make; the biggest areas for customization were the arrangement of meta keys on the right side of the board and the layout of the keys surrounding each space bar.

Layouts supported by the Sinc

I opted for a layout that included a set of arrow keys and a larger right space bar:

One of my motivating reasons for building a custom keyboard was to make something that “looked cool”, so I spent a fair amount of time researching keycap designs and styles. Since my keyboard had a split layout, many traditional keycap sets wouldn't be compatible. Notably, I needed a set that had support for the shorter split space bars, and the shorter right shift key. After a bunch of comparisons, I settled on PimpMyKeyboard's DSA “Granite” set.

The DSA keycap profile is uniform and flat, and aside from ergonomics I think that they just look nice.

DSA Profile

Source

Of course, one's choice of key cap is almost entirely based on aesthetics, so this was one of the more fun parts of the process. In particular, I liked the choice of font and the size of the key labels; I think it lends the board a good mixture between “retro” and modern design.

I opted for symbolic modifier keys (e.g. the shift symbol (⇧) instead of the written out “Shift” word). I'm pretty happy with how they look, and I like that I was able to repurpose some of the more obscure modifiers (e.g. scroll lock, insert, etc.) as macro keys, because their meaning is ambiguous enough to be repurposed.

The tricky part of choosing keycaps was making sure that I got full coverage of all the keys I needed. The layout I chose had non-standard space bars and some other funky keys (non-standard right shift and modifiers). Fortunately, PMK made it easy to customize my keycap set. Ultimately, I ended up needing:

One goal that I had when building my Sinc was to support hotswappable key switches. One of the benefits of mechanical keyboards over traditional “rubber dome” keyboards is that you can customize the feel of the key switches. Some people prefer tactile or clicky switches, and others (like myself) prefer a smoother keypress. Currently, I've really been enjoying using linear (Cherry Red) switches, but I could imagine wanting to try out different switches in the future.

Making a keyboard “hot-swappable” was easier than I anticipated: you solder a small socket into each through-hole in the PCB, and then the key switches are effectively just pressure-fitted into those sockets.

I found a source of PCB sockets online, and ordered enough to have two for each key. These sockets were surprisingly expensive, given they're just tiny pieces of metal. But, they're a lot cheaper than a whole new keyboard – which is otherwise what you'd be looking at if you wanted to try out a new switch type.

Normally, you'd solder the key switches directly to the PCB, skipping the socket step. Surprisingly, soldering the sockets instead of the switches themselves actually made assembly easier. I messed up on the placement of a couple of the keys, and instead of having to desolder the switch, I could just solder in additional sockets (consequently, it helps to buy some extras!).

As for the actual key switches I used, I bought a set of Gateron Red switches from KEBO. I use Cherry Red switches on my Kinesis, and like the linear switch type. However, I'd heard good things about Gateron's version of the Red switch (and they're slightly cheaper).

Now that I've typed on the Gateron Reds for a bit, I can't say that I notice a huge difference between them and the Cherry Reds. They do feel a bit “looser” and seem to require a bit less force to actuate, but I can't really say that they feel any “smoother”, contrary to claims I'd seen online.

When I ordered the Sinc PCB from Keebio, I also ordered a set of plates to use as a case. The design of the plate is really straightforward: two pieces of metal with cutouts for the keys, fastened together by several screws and standoffs around the perimeter. However, I was initially a bit disappointed with how it all fit together. The PCBs' USB-C and TRRS sockets give the board a bit of height, so they didn't sit flat on the bottom plate. It took a fair amount of tinkering to get the case to close correctly, and even then I felt like I'd done something wrong.

Making matters worse, the build documentation was fairly lacking (the portion about assembling the plate kit was copy/pasted from an older design), but since the Sinc is a new product for Keebio, I'll give them the benefit of the doubt.

After enough messing with the tightness of each screw, I did get a final result that felt sturdy enough. I also put some rubber feet on the bottom of the plates to prevent the keyboard from slipping on my desk.

I really like the way that the keyboard came out! Aesthetically, I think it looks pretty cool, and ergonomically it's nearly as comfortable as my Kinesis.

I showed my completed keyboard to some coworkers, and one of the first questions I got was “what does the Space Invader key do?”. My answer was… “nothing yet”. I still haven't decided how to map each of the macro keys, but the Sinc supports the open-source VIA Firmware (as well as QMK), which makes it really easy to set up macros. My current favorite macro key is one that automatically locks my machine. Dedicated media keys are also handy…

Building a custom mechanical keyboard was certainly more expensive (not to mention time-consuming) than purchasing a pre-made one, but that's understandable given the economics of short-run manufacturing.

Overall, this was a fun project. It's rewarding to have built something that I'll continue to use daily, and I enjoy seeing the finished keyboard on my desk – it's almost like a functional piece of art.

(read more)