The final Letterbox, and a problem
The revisions to the letterbox code to deal with the glitching seem to be working well. I have tweaked the code so that the LED blink is only turned off by actually pressing the button, rather than when the status page is looked at in a browser.
Normally, this would likely have been the final revision of the code and I could just leave it running as it was. However, partway through the process of flashing the code to the device, it reported a checksum error and from that point on it failed to connect. Serial output is working fine. The rest of the device is working fine. I just cannot get it to connect, no matter what serial interface I use.
I wonder if the USB serial device was outputting slightly more than the expected 3.3V and, after long enough of being shouted at, the pin was like "nope, I'm done"?
I reached over, picked up another ESP32-Cam - the one with my recent camera server code on it - and flashed the letterbox firmware into it. Then I ordered a replacement ESP32-Cam from Amazon for... something like €8, they aren't expensive.
You're probably wondering why I didn't just get a regular ESP32 with plenty more I/O. Well, the reason is that I have already put together a small circuit board for the camera device and, really, a few quirks aside, the smaller board is more than enough for this task - two inputs and an LED output. There's nothing fancy like PWM-controlled LED wibbling, and the inputs are simple buttons so it's just a high or low reading.
Plus, the one I picked up is now "the letterbox device", the replacement is to, well, replace it. Clue in the name. ☺
As an aside, I would have preferred stripboard, but it seems that most of what can be found on Amazon (and a few other vendors) is this weird stuff with through-plated holes that aren't connected to each other. I think the idea is you drag solder blobs around to join holes to make your own tracks, but my solder just wasn't playing that game. I had to find some thin wire, tin it, then lay it down and tack it to the holes to make tracks that way.
Left is up; it is fitted to the side of the meter cupboard.
The new camera just arrived, next day delivery. I walked up to the top of the lane to get it, taking a mug of tea with me of course. It is running some ancient firmware, but it's enough to set itself up as an access point and offer the early camera web server (with the non-working face detection stuff). The OV2640 camera module is pretty awful, though. Far too many damaged pixels. So I just gave it a three-star review on Amazon. The ESP32 itself appears to work, and I like that it has USB-C rather than being a micro-USB device as is still quite common (primarily because it seems that most of my micro USB leads are for charging and not data, which means when I want to hook something up, it's like "Dammit, again?").
I have a few OV2640 modules around, so I can just replace this with a better working module, but still...
(I wonder if less scrupulous companies are flogging off quality control failures/returns)
An example low-light photo showing the faulty pixels.
Inserting a different camera module, everything works fine. This particular module has the sensor 90° rotated, which is why the phone I'm holding here appears on the right of the shot that you can see.
Watching you watching me...
Governmental failure
Last year, Michel Barnier was chosen to be the Prime Minister of France. He made some moves that were highly unpopular and, after forcing the budget through without a vote, he faced a motion of no confidence.
Which he lost dramatically.
This year, it's another old friend of Macron who is Prime Minister, and it's an even more ridiculous attempt at removing workers' rights. This time by removing two "less important" public holidays, claiming that France needs to be working harder and producing more (rather in keeping with Macron's thoughts about the French being workshy), even though INSEE (the French statistics agency) doesn't think it'll make much of an impact at all on the public deficit.
I should add a quick note here to say that whilst public holiday days are paid holidays, they only happen on the day. There isn't the British idea of deferring to a Bank Holiday Monday. If you're a Monday to Friday worker and the holiday is on a Sunday, well too bad. Maybe next year...
This comes at a time when the minimum wage (about the same as the UK's) is hardly keeping up with inflation. Where rather blatant price gouging and shrinkflation are depressingly normal (there is a law requiring supermarkets to flag products where this has happened, listing the previous price, the new price, and how much it has gone up by - I have yet to see one single example of this signage). Where the price of electricity has certainly risen more than the supposed price cap would have implied. The purchasing power of ordinary French people has been hit hard, lower-income families are struggling, and the government that is supposed to be keeping everything in check wants to take more from the masses that don't have, rather than those who do.
That's not to say I'm in favour of windfall taxes for the rich. A one-off dose of cash is a sticky plaster on a bigger problem. I think it would be far better to close the loopholes that allow such rampant tax avoidance in the first place.
But, of course, just like in the UK it is rich people running the country so they're hardly about to vote against their own self-interest. Even though it makes perfect sense that a person earning about six hundred and fifty times what I'm making should pay six hundred and fifty times more tax and contributions. For clarification, that's only earning a million a year... That's not even "rich" by today's standards.
Anyway, the incredibly smart move of François Bayrou is to...
...yup, you guessed it. To hold a vote of confidence on the 8th of September. Pretty much everybody expects the inevitable to inevitably affirm that obvious things are indeed painfully obvious.
Meanwhile the CGT (a very militant French union) is calling for France to down tools on the 10th of September. And maybe again on the 18th, and as often as is necessary until Macron gets the message and calls a general election.
At the moment these are not "declared strikes". What this means is that while going on strike is a part of French mentality and work culture, it - like everything else in France - is regulated. Employees can down tools in response to a declared strike as it is seen that they are taking action to protect their rights. If, however, they just refuse to work because a union leader said so, then they can be sanctioned for abandoning their position.
The danger, however, of kicking out Macron and having an election and putting some populist twat into power is that it isn't a solution. France's economy is not in good shape, and while it is entirely reasonable that the little guys rebel at the idea of being asked to take the brunt of the savings when it wasn't them that made the decisions leading to the current mess, there will need to be some changes along the way. The current situation is not sustainable, and the problem with populists is that they offer empty promises by blaming others. I say this specifically because it is looking increasingly likely that the UK will do something dumb like elect Farage. That would be, basically, half-arsed Trumpism; and when you have a President unlawfully slapping tariffs on everything in sight and running the military into Democrat cities due to some perceived notion of kicking out illegals... that's not really a direction you want to go in.
Fixing democracy
It seems rather clear that having rich people running countries is only really going to be of benefit to them and not to the citizens.
Maybe it would be better, instead of having elections and lobbyists, to choose the government according to different criteria - say, randomly selected from the registered citizens, subject to the following conditions:
Is at least 26 years old.
Has experience in the domain in which they are expected to serve (for example, the Education Secretary should be a teacher, not a plumber).
Is mixed across geographic locations and classes; so not a bunch of upper middle class people from the Home Counties.
Is not selected according to other, more personal criteria such as religion, ethnicity, or gender identification (etc).
Don't misinterpret this, I'm not saying no blacks or whatever, I'm saying that these criteria should not be considered relevant, and indeed collecting and collating it is liable to be problematic.
Is an upstanding citizen; which means absolutely zero infractions or police cautions. If you have so much as a parking ticket or speeding fine, then you're disqualified.
And, finally, is able to complete a test (in standard native language) demonstrating an understanding of how the country and judiciary actually functions.
These people will serve for a period of four years, and thereafter be ineligible for further service. Their jobs will be protected, and they will be paid a standard wage according to the level of government they serve in (standard MP versus cabinet, for example).
All forms of lobbying will be outlawed, whether cash bribes or promises of good things in the future. To enforce this, two things will be enacted. Firstly, the MP who reports a bribe will be given a good bonus when the bribe is proven; and secondly, the CEO of the company/organisation involved will be considered liable and dragged into court. If the case is proven, they will be jailed.
There's a lot more that needs to be looked at, such as the civil service and who advises the government and upon what criteria, but I can't help but think that a government of the people for the people, and a rather more literal interpretation than in the Gettysburg Address, may be a better way forward than having the likes of Starmer, Badenoch, and Farage insulting each other while nothing changes.
Thoughts and prayers
A few days ago, a deranged nutter with legal access to weaponry fired into a church. A church containing children, some of whom were injured and, given the bullets, some of whom died.
Cue a procession of people talking about thoughts and prayers. Or in the words of Trump, "Please join me in praying for everyone involved!".
Screw your prayers.
Those children were literally in a church, praying. Where did that get them? Oh, yeah, traumatised and/or maimed, or dead.
The problem is staring everybody in the face, but it's easier to appeal to a mythological angry sky fairy than to admit that there's a problem.
Now, I'm not an expert on American things. If we look at the second amendment, it says, quote:
A well regulated Militia being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.
That's the version enacted in Maryland. There are other versions, they mostly differ on the number and placing of the commas. Now, to me, it seems quite logical that what is being said here is that the citizens should form their own sort of army, a "well regulated militia" in order to repel overreach by federal government...like, um, what we are seeing today with Trump and the National Guard. However, somewhere along the way those commas became more important to separate the phrases. Not to say "you can keep guns and be a part of a militia" but rather "you can keep guns [comma, separate clause] and be a part of a militia", implying that the two are not necessarily connected. Of course, there are about as many opinions and interpretations of the second amendment as there are websites talking about it. Also the judiciary is quite happy to apply 20th/21st century thinking to something written in 1791; a time when modern firearms would have been a pioneer's wet dream. So, as I said, I'm not an expert so I'm not really qualified to have an opinion on this, or the logic of open carry just to go pick up a box of eggs from Walmart (not every state, but there are some, down there, you know where).
What I can do is propose the question we should all be asking: How many more children?
Time after time, a suitcase of memories.
One Core, Two Core, Red Core, Blue Core
As reported in TweakTown, Apple is looking to cook up a chip - the so-called M8 - which will have up to 256 CPU cores and six hundred and holy-crap GPU cores, in other words, turning it up to twelve because they can.
If we take a look at the current M4 processor, each GPU core offers 16 execution units and 128 arithmetic logic units; which means if the M8 offers similar we could have a piece of silicon with 256 general processor cores, and a GPU with 10,240 execution units and a piss-taking 81,920 ALUs. Plus, of course, whatever Neural processing units are also included because AI is a thing that everybody is fawning over (you'll start to see a bunch of AI summaries that nobody asked for turning up in Firefox).
Yes. Read that again. 256 CPU cores, 640 GPU cores with a possible ten thousand EUs and nearly eighty-two thousand ALUs. Remember when a dual-core CPU was something you bragged about in the pub? Now it's "cute". Even the little ESP32 microcontroller that you can get for under a tenner is dual core.
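For anyone who wants to check that arithmetic, it's simple enough - bearing in mind it's an extrapolation, assuming the rumoured M8 keeps the M4's per-core ratios:

```python
# Extrapolating the rumoured M8 from the current M4's per-core
# figures (16 execution units and 128 ALUs per GPU core) - this
# assumes the M8 keeps the same ratios, which is pure speculation.
gpu_cores = 640
eus_per_core = 16
alus_per_core = 128

total_eus = gpu_cores * eus_per_core    # 10,240 execution units
total_alus = gpu_cores * alus_per_core  # 81,920 ALUs
print(total_eus, total_alus)
```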
If this keeps up, you'll need your own generator to run it.
The only snag is that, well, most of our software still runs as if it's 1985 and we're writing something in BBC BASIC. One line at a time, in order, and god help you if you want to do two things at once.
So you fire up your shiny 256-core MacBook UltraMax ProPlus MegaExtra, start your word processor, and what happens? One little sad thread trudges along, while 255 cores twiddle their silicon thumbs and wonder why they even exist.
Okay, that's not entirely fair as the operating system will spread the tasks out amongst the available cores, but as I noted when I commented on how awful and slow the Arduino IDE is, it was maxing out one core of my computer. Meaning that it was using around 25% of my machine's processing capacity. The other 75%? Largely unused. Other tasks and Linux itself ran on them, but me waiting for the IDE to do something is hardly stretching the capabilities of my computer. But that's what happens when you're a single thread.
Of course, we can write parallel programs - ones that just fire up a bunch of threads, sort of like this:
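(The original listing isn't reproduced here; as a stand-in, here's a minimal sketch of the "fire up a bunch of threads" approach, in Python for brevity - the shape is much the same in any language with threads.)

```python
import threading

def work(chunk, results, index):
    # Each thread sums its own slice and writes to its own slot,
    # so the workers don't stomp on each other's data.
    results[index] = sum(chunk)

data = list(range(1_000_000))
num_threads = 4
size = len(data) // num_threads

results = [0] * num_threads
threads = []
for i in range(num_threads):
    t = threading.Thread(target=work,
                         args=(data[i * size:(i + 1) * size], results, i))
    threads.append(t)
    t.start()

for t in threads:
    t.join()   # wait for every worker to finish

total = sum(results)
print(total)   # same answer as sum(data), just with more ceremony
```

(Amusingly, in standard CPython the Global Interpreter Lock means these threads don't even execute Python bytecode in parallel, which rather underlines the point.)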
This is what parallel programming looks like today, but that's more a crime scene than a program, isn't it? It is basically taking a linear program and breaking it into bite-sized chunks for easy consumption.
Have fun dealing with locks and race conditions, especially those ones that only turn up on alternate Thursdays when it is raining. Concurrency is scary. The simplest way to deal with it is just don't. But like RISC OS dealt with potential re-entrancy and race conditions by banging off the interrupts all the bloody time (not to mention smashing the system stack every time it could), that sort of thing just doesn't scale. It's great for a single processor, but those days are long gone.
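To see what those alternate-Thursday bugs actually look like, here's a classic demonstration (Python again, for brevity): two versions of a shared counter, one of which interleaves its read-modify-write and can silently lose updates, and one protected by a lock.

```python
import threading

N = 100_000
counter = 0
lock = threading.Lock()

def unsafe():
    global counter
    for _ in range(N):
        counter += 1      # read-modify-write: threads can interleave here

def safe():
    global counter
    for _ in range(N):
        with lock:        # the lock makes the read-modify-write atomic
            counter += 1

def run(worker):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(unsafe))  # may be anything up to 400000, depending on the weather
print(run(safe))    # always 400000
```

The unsafe version may well print the right answer on your machine, today. That's exactly the problem.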
Keep doing things like that and you'll be running about fifty tasks (a few user applications and all the background crap that you never notice until it goes wrong) while the other two hundred cores sit there like "Mate, we exist!", and the OS might generously allocate some of them a boring job like "no, the mouse hasn't moved since the last fraction of a nanosecond". The heavy lifting is now spread across as many cores as the programmer thought to create threads for, but it's still not going to be even close to exercising the capabilities of the hardware.
I can't help but think that there is a massive paradigm shift coming. We'll be obliged to stop writing stuff in C because we'll cease having linear programs that do things like "add this, shift this, dance the hoopla over that array, and hit the weed and the booze because you'll need it to work out what the hell int *(*(*ffs[12])()) () is actually telling you". One step, then another, and the one following, all broken down into micro-operations. A processor doesn't understand a JPEG of a kitten; it understands loading a value into a register and adding it to another, then writing the result back someplace. And using these relatively simple logic operations that basically amount to shuffling values held in memory, transforming them as they pass on through, we provide the illusion of documents, of cute girls in adverts for products you have no interest in, of that epic rock opera you just spent an hour enjoying.
But it's all very rigid and linear and so exactly prescribed that a bit of software written for a PC just won't work on a Raspberry Pi - and not because it's Windows versus Linux (or RISC OS), but because the language of the processor, the set of instructions that make the program, isn't even remotely similar. Indeed, I think the RISC OS world is maybe starting to realise that the 64 bit (read: modern and trendy) version of the ARM is a totally different beast to the 32 bit (read: ancient and soon to be forgotten) version. But, aside from various internet scripting like PHP and JavaScript, and operating system scripting like Python and Lua, most software passes through a thing called a compiler that takes the program as written by the programmer and translates it into these low level operations as understood by the processor.
I think all of that is going to stop soon, because although single thread programs are horribly wasteful of resources on modern machines, multi-thread programs are scary to create, and I can't help but feel that it's not really a viable way forward. Sure, my Linux machine currently has nearly 700 threads running, and Linux will spread the work out amongst my pitiful four cores. But, again, Arduino IDE (or, more correctly, Java): When one program needs the power it just can't get it because it's a single thread that will max out a single core.
Instead of a linear program that gets compiled into a list of instructions to hand to a processor (and/or GPU), we'll use a higher level language that will describe tasks. You won't care if it runs on the CPU, the GPU, or that "neural engine" thing that is basically silicon voodoo. You'll just say:
This is what I want done.
This is how the pieces fit together.
And the runtime will figure it out. No threads, no mutexes, no jumping through hoops like a trained dolphin in order to be sure that concurrency doesn't cause everything to explode in a shower of digital sparks.
Note, in particular, the second part, the "how all the pieces fit together". You're telling the runtime what tasks depend upon something sorted out in other tasks. Each task runs as soon as its inputs are ready. The parallelism is implicit, you don't need to babysit it, in fact you don't even care. The runtime just makes it all work. Which is exactly how it should be. Especially if you're going to have a chip with that number of processing units, of whatever type.
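To make that concrete, here's a toy sketch of a dependency-driven scheduler (Python; the names - task, after, and so on - are invented for illustration, this isn't any real runtime). Each task runs as soon as the tasks it depends on have produced their results:

```python
from concurrent.futures import ThreadPoolExecutor

class Runtime:
    """Toy dependency-driven scheduler: you declare tasks and what
    they depend on; each runs once its inputs are ready."""
    def __init__(self):
        self.pool = ThreadPoolExecutor()
        self.futures = {}

    def task(self, name, fn, after=()):
        deps = [self.futures[d] for d in after]
        def wrapper():
            args = [d.result() for d in deps]  # block until inputs ready
            return fn(*args)
        self.futures[name] = self.pool.submit(wrapper)

    def result(self, name):
        return self.futures[name].result()

rt = Runtime()
rt.task("load",   lambda: list(range(10)))
rt.task("square", lambda xs: [x * x for x in xs], after=("load",))
rt.task("total",  lambda xs: sum(xs),             after=("square",))
print(rt.result("total"))  # 285
```

A real runtime would only dispatch tasks whose inputs are already complete (rather than letting workers sit and block), and would decide per task whether the CPU, the GPU, or something else gets the job - but the shape of the programming model is the point: declare the work and the dependencies, and let the scheduler sort it out.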
Furthermore, the runtime can work out if it's a job that's best suited to running 10,000 times in parallel on the GPU, or to push the whole lot to some of the lower power cores because the battery is screaming blue murder.
But, wait, that can't possibly work. Your code compiled for an ARMv9 sure as hell isn't going to run on the GPU. And... let's not even look at the NPU as it'll make your head spin like Regan MacNeil's.
Correct. All of the bits have their own specialities. The CPU does everything. The GPU does the same thing a thousand times in parallel and the neural core is black magic that we'll quietly gloss over, and most devices these days have a number of co-processors for deftly handling certain computationally expensive things such as colour translation, working out DCT blocks or Huffman tables, calculating hashes and checksums...
And let us not forget there are loads of CPUs (ARM, x86, MIPS, RISC-V, Xtensa, AVR, PIC, 8051...) and there are equally loads of types of GPUs and they're all different.
So how can we deal with this?
There are three possibilities.
Interpret everything, making the task description a scripting language: This would be painfully slow. Imagine "watching" your favourite TV series by having a friend watch it (you can't see or hear it), then stop for a bit to describe to you what just happened. Repeat until the credits roll.
JIT, or the JavaScript trick: The runtime will throw together your description into code for the CPU or GPU or whatever the first time. The first time it'll be slow. But then that bit of code will be cached and can run directly from that point on.
Hybrid: There will be a prebuilt library of code for common things, and it'll JIT all the rest.
The third option is the viable one, especially given how many programs simply piece together standard library functions with a bit of custom code to do "whatever". There's a lot of scope for a decent selection of library functions that can be automatically tuned to the hardware; the runtime can figure this out when it is first installed.
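A toy illustration of the hybrid idea (Python; eval stands in for a real compiler, and all the names are made up): common operations come from a prebuilt library, anything else is compiled once and cached.

```python
# Hybrid dispatch sketch: a prebuilt library of "tuned" kernels for
# common operations, plus a compile-and-cache path (here, eval is a
# crude stand-in for a real JIT compiler) for everything else.
prebuilt = {
    "sum":   sum,                           # pretend these are hand-tuned
    "scale": lambda xs, k: [x * k for x in xs],
}
jit_cache = {}

def run_op(name, source, *args):
    if name in prebuilt:                    # fast path: tuned library code
        return prebuilt[name](*args)
    if name not in jit_cache:               # slow path, first time only:
        jit_cache[name] = eval(source)      # "compile" and cache it
    return jit_cache[name](*args)           # cached thereafter

print(run_op("sum", None, [1, 2, 3]))
print(run_op("double", "lambda xs: [x + x for x in xs]", [1, 2]))
print(run_op("double", None, [5]))          # cached: no source needed now
```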
Does it sound hard? Well, isn't this basically what machine learning systems currently do? All of the magic that happens when you ask an LLM for a picture of a cat.
Little Emily is here to reap your soul.
So why aren't we doing this already?
It's because the software world has the same problem as Britain's railways. We keep slapping new tech on top of infrastructure designed in Victorian times, then act surprised when it can't cope.
The languages we use - C, Java, Python, even the trendy Rust (and of course BBC BASIC!) - are all essentially serial and linear by default. Some have parallelism but this is bolted on. You can do it, but it's clunky, fragile, and if it breaks, it tends to do so in a way that costs you a small piece of your soul.
So where does this end? It ends when we stop measuring "progress" in how many cores we can stick onto a die, and start measuring it in whether programmers can actually use the bloody things without them or their users sacrificing their sanity.
The future isn't an ever-increasing number of cores, it's a universal task language along with a runtime that can work out how to translate that to whatever silicon is available.
Until then, we'll be "holy crap" at 256 cores, until the 360-core monster comes along, which will then be sidelined by 512 cores and... it won't be a pee-your-pants-in-astonishment moment, as FIVE HUNDRED AND TWELVE CORES!!!!!1!!1! is best written on the box to fool the clueless, because in reality most of them will spend their entire silicon lives doing nothing more exciting than saying "nope, the mouse pointer still hasn't moved".
We don't need more cores. We need to devise a way to just declare what work needs doing. The machine can figure out how to do it. Because honestly, if a computer with multiple hundreds of cores can't work that out for itself...what's the point?
Your comments:
Please note that while I check this page every so often, I am not able to control what users write; therefore I disclaim all liability for unpleasant and/or infringing and/or defamatory material. Undesired content will be removed as soon as it is noticed. By leaving a comment, you agree not to post material that is illegal or in bad taste, and you should be aware that the time and your IP address are both recorded, should it be necessary to find out who you are. Oh, and don't bother trying to inline HTML. I'm not that stupid! ☺ As of February 2025, commenting is no longer available to UK residents, following the implementation of the vague and overly broad Online Safety Act. You must tick the box below to verify that you are not a UK resident, and you expressly agree if you are in fact a UK resident that you will indemnify me (Richard Murray), as well as the person maintaining my site (Rob O'Donnell), the hosting providers, and so on. It's a shitty law, complain to your MP. It's not that I don't want to hear from my British friends, it's because your country makes stupid laws.
You can now follow comment additions with the comment RSS feed. This is distinct from the b.log RSS feed, so you can subscribe to one or both as you wish.
Zerosquare, 31st August 2025, 00:47
I'm afraid you didn't address the root cause of the problem.
The puzzle pieces you mention (languages designed specifically for parallel processing, efficient JIT compilers, auto-parallelization) not only do exist, they're already being used. Processors, compilers, web browsers, 3D graphics drivers... all of them pull every possible trick to convert the code they're given into something that executes efficiently.
But that only goes so far. Because the ugly truth is that parallelization, and concurrency in general, is one of the hardest problems in computer science.
Some common workloads are inherently very difficult, or impossible, to process efficiently with a parallel architecture. Others are better behaved, but only if you are extremely careful in the way you design the algorithm and the input data.
And if the theory wasn't hard enough, there's also the fact that it is much easier for human brains to reason in a serial manner. That's why most main computer languages are still serial, not just because of conservatism. Languages that aren't require a mental paradigm shift, something that's a significant obstacle for a lot of programmers. So they get little popularity and usage.
There's simply no magic language or tool that can turn a high-level "here's what I want to achieve" description into "here's an efficient implementation for that hardware". Fields in which code performance is critical employ highly-skilled people whose job description is exactly that.
But the managers and beancounters don't want to pay for highly-skilled programmers. They want cheap, easily replaceable ones; or even better, none at all. In the 90s, the "no programmers needed" buzzwords were 4GLs and RADs. Today, it's AI. So don't expect things to get any better soon.
jgh, 31st August 2025, 00:57
"All forms of lobbying will be outlawed"
And how do you do that? When Group A has the power to make decisions affecting Group B, *of* *course* Group B are going to try and influence what decisions Group A are going to make. The only way to avoid this is to severely limit the ability of Group A to do anything that affects Group B.
jgh, 31st August 2025, 01:14
"I think the RISC OS world is maybe starting to realise that the ...ARM64 is totally different to ARM32..."
Only now? Have they had their eyes closed? I was up to my elbows trying to write an ARM64 assembler for BBC BASIC... checks datestamps... six years ago, and most of my effort was tearing my hair out at the lack of accessible documentation, and the complete "otherness" of the instruction set. And ARM64 had been around for a decade by then. ARM64 is *NOT* "ARM", it is a completely different processor.
I think I spent more of my time on the project working out and writing up instruction set encodings than on actual coding. Some of the encodings make the later 80x86 family look sane.
C Ferris, 31st August 2025, 10:38
Didn't know that humans knew how their brains worked :-/
Rick, 31st August 2025, 15:07
Zerosquare: That's why I was a bit vague on the details, simply because there isn't really an ideal existing solution to the problem; suffice to say that giving an increasing number of cores to a program using very few (if not only one) isn't really a solution either. It seems as if we have reached an upper bound on the sort of clock speeds we're able to make transistors flip at, hence the ever-increasing number of cores. So maybe it's time for a large-scale rethink?
JGH: There's a lot to work out in the proposal, but it seems all too often lobbying is effectively legalised bribery. That needs to stop.
Colin: We don't really know how brains work, but it doesn't take much imagination to understand that people think in a serial one-step-after-another manner. Many of us are horrible at multitasking, stereotypically women are better at it, but even that has limits. One might be able to hold a small child *and* iron *and* talk on the phone *and* watch Corrie at the same time, but that's only four things with a fairly slow context shift between each of them. When faster context switching and actual concentration is required, we just can't. Ask anybody who has had a near miss while driving and fiddling with their phone when they were supposed to be watching the road.
Rob, 31st August 2025, 20:43
Fourth compilation type. Compile to an intermediate code that can then run on a variety of architectures, just needing a support/translation layer coded for each one. I gather Java does it this way, but I was writing code that ran this way back in the 1990s (BOS Global, copyright 1982) that I could copy the compiled code across to x86, pdp-11, even I think Z80 based systems, with no changes needed. No idea how well any of that would work on multiple cores, but as the os supported multiple (dumb serial terminal) users doing multiple things per machine, I imagine it'd do that part pretty well.
Tom, 1st September 2025, 22:30
The programming model needed for massive parallelism should be CSP. I think it is not "... one of the hardest problems in computer science" as the theory was already there in the '80s, the Transputer was based on it. Nowadays one can try Erlang to get an idea (its inventor Joe Armstrong explained it is possible to speed up near N times on an N-core machine when shared state is minimized). It needs some rethinking of algorithms, like mapreduce instead of linear search. Multithreading in C is way too complex and as long as it uses a single heap it has too much shared state.
This web page is licenced for your personal, private, non-commercial use only. No automated processing by advertising systems is permitted.
RIPA notice: No consent is given for interception of page transmission.