I’m old enough now to know that some things aren’t targeted at me, and that my distaste for something doesn’t translate into broader artistic merit or usefulness. That said, I watched my first YouTube Shorts today, and left more… confused than I expected!
I don’t even know how it started. One minute I was catching up on someone talking about a Sanyo MBC system from the 1980s, and the next I was watching a vertical video about a carved watermelon lamp with single-word subtitles that looped for some reason.
To be clear, is an idiom. Vertical video makes sense, given people are watching this sort of disposable media on their phones. I’m all for the accessibility subtitles provide, though I’d prefer they weren’t baked in and didn’t take up half the screen. The looping I don’t get; whenever I see people watching these sorts of videos, they’re swiping away as soon as the previous video finishes.
Curiously, it was the stilted audio I found most jarring. The Overcast podcast app became famous for the somewhat grim idea that contemplative silence isn’t a natural part of speech, but wasted time to be cut so you can cram even more audible thoughts into your brain hole. But this takes it to a level where the previous word is barely finished before the next begins. I’m trying not to make a broader point about attention spans here, but it’s unavoidable. Are we that time starved? Or will people stop watching if the wholevideoisn’tonelongrunonsentence?
I know, I know, get off my lawn!
But there was one other aspect to this where I think criticism is warranted. I’d say at least half the videos that were regurgitated to me started with the same voiceover:
In this video, we can see this person…
Half of these videos aren’t even original. They’re stolen wholesale, and overlaid with text and an AI voiceover that explains in painfully obvious detail what we can already see.
… is building a beautiful desk. The first step involves picking up a hammer, which he uses to hit a nail into the side of…
No, really!? He was using a hammer to drive in a nail? Not a tool to bludgeon himself in the head!? In his DAMNED HEAD!?!?!?!
I’m perhaps showing my ignorance here, but I thought YouTube had famously onerous and overzealous piracy protections? I’m surprised these videos aren’t being flagged almost immediately. Or maybe they are, and the shorts I watched earlier today are already gone.
It makes me wonder what a Short would be for NetBSD.
I’m going to show you how easy it is to install package source! This is the tool that lets you install software on your NetBSD operating system! First, run
su and pipe this fetch into ksh which…
Hmm, maybe not.
So how would I summarise my first—and hopefully last—experience with YouTube Shorts? I dunno, that seems like a waste of time given you’ve already read the post. You do you, but I’ll continue to steer well clear of this sort of content (a deliberate word choice).
By Ruben Schade in Sydney, 2025-10-28.
There was a time before Chrome where software versions were comprehensible to mere mortals. They still are for some software, but I couldn’t tell you off the top of my head (or any other body part) what version of Firefox I’m currently running, or LibreOffice, or honestly even macOS on my work machine. There’s something to be said for remembering version numbers, and maybe even the idea of “living specifications”, but that’s a discussion for another time.
What I can say is that I do remember specific version numbers for software that had an impact on my life in positive ways. Here are just a few:
NetBSD 1.6/macppc: This was my first BSD, which I ran on an iBook G3. It felt like such a massive performance upgrade from those early versions of Mac OS X. I still remember sitting at school poring over the NetBSD Guide and getting it configured; it felt like electronic LEGO.
FreeBSD 7: I first ran 6.x, but it was 7 where I really hit my stride with it. I still have all the installer ISOs for it, and the disk image backup from my first home server that ran it. I’ve been using it ever since.
Debian Wheezy: This was the Linux distro and release I first supported at my longest job. Incidentally, it was the last version not to mandate systemd.
Lotus Organiser 4: This was the release that came with my old man’s work ThinkPad back in the day, and what I ended up using as my life organiser for many years.
KDE 3: This was peak UI. I’ve come to love Plasma, but I’m struck by how great KDE 3 is every time I fire up a retrocomputer.
TextMate 1: I settled on MacVim as my primary GUI editor on my Macs, but that original TextMate was stupendous. I’ve since moved to Kate from the KDE project for more complicated projects, but I’m still hit in the feels when I launch it on my classic machines.
Lotus SmartSuite Millennium Edition and Microsoft Office 97: I used these well into the late 2000s for school work, uni assignments, and personal projects. I’ve long since moved to OOo/LibreOffice, but I think these represented the usability peak of graphical Office software. I’m surprised that they feel more responsive on my Pentium 1 than Office for Mac does on my current work M3 MacBook Air.
By Ruben Schade in Sydney, 2025-10-28.
Hello,
I'm trying to install NetBSD 10 on a Raspberry Pi 4.
The kernel starts, but when I connect my USB keyboard (HP), I get a kernel panic in the USB stack.
It's the same problem with the official NetBSD image, or when I use a release built from sources on my PC.
I've rebuilt 9.4 from sources and it works fine on my RPi 4.
Hi all,
I have a working installation of NetBSD on a raspberry pi 4.
I am trying to build the kernel with both GENERIC64 and a custom config.
Regardless of using build.sh or config/make depend/make, I get an error in aarch64_machdep.c saying MACHINE is not defined.
Digging a bit, MACHINE is defined in aarch64/include/param.h.
That file should have been included by sys/param.h.
I tried specifying -m evbarm -a aarch64 with no improvement.
Did anyone have the same issue? If so, how did you solve it?
I recently installed NetBSD 10.1 and got it set up; as far as I can tell, networking and pkg_add are working, and I installed pkgin, then used pkgin to get CDE. I did all this as root, but I also have a regular user account. I then did everything README.pkgsrc and README.netbsd in /usr/pkg/share/doc/cde said to do, besides the second-to-last section in README.pkgsrc about the dt login manager. Though after editing my /etc/man.conf to add ${PREFIX}/dt/man, I now get an error from makemandb saying line 71 of it is corrupted. But line 71 is unmodified; it's just the _sparc64 section of machine classes per machine, and was there before I edited man.conf.
When I run startcde it shows the "starting the Common Desktop Environment" screen and the loading cursor, but after a few seconds it drops back to the terminal and says "connection to X server lost", then the "waiting for X server to shut down… server terminated successfully, closing log file" messages. I can startx fine, but CDE doesn't want to do its thing.
I want to take you back in time to a few moments in my life, with apologies to the crew of the USS Relativity who’ve warned me many times to abstain from such activity. Funnily enough, the recording of that episode coincides with the first step in the story.
In the glorious year of 1999, Lou Bega had reminded us of Pérez Prado’s fifth mambo, Revolutionary Girl Utena had blasted onto the big screen again, and my sister and I were hired in Singapore to do voiceover work for the Discovery Channel. Every few weeks we’d go to Opus Studios in Clarke Quay and record various “bumpers” and announcements for Discovery Kids and related programmes. Then my voice broke, and suddenly they only wanted my sister. Darn.
It was a fun experience, but ironically I remember more about the setup the supporting staff had downstairs than the recording studio itself. As you walked into the refurbished Peranakan shophouse, you were confronted by rows of desks sporting Apple’s then-new series of colourful desktops. Every desk had a different flavour of iMac, and the professional mixers had Power Mac G3s with their gigantic blue CRTs. If you weren’t around in the late 1990s, it looked just as cool and optimistic as you’ve read.

I’d used various Macs at school for years by that point, but 1999 was also the year I got my first. It was shamelessly all about the enclosure; I first saw the original Bondi Blue iMac in PC Magazine, which decried it as a gimmick that couldn’t match whatever Pentium was around at the time, and I desperately wanted one! Seeing them at the studio only reinforced this, so my parents relented and bought me one for Christmas. The year had ended with some bad family news, so I suspect it was also to soften the blow.
I still have that iMac G3 DV (pictured below), though it has many problems from years of international moves smashing it up on the inside. That’s a story for another post. But we couldn’t justify the additional cost and size of the Blue and White Power Mac G3, despite its unconscionably cool door mechanism to which its PowerPC motherboard was attached. I had the first page from a brochure of the machine on my wall for years, but that was it.
☕︎ ☕︎ ☕︎
My next experience with the Power Mac G3 came in Adelaide. It was 2006, and I saw someone selling their old tower for almost nothing, not that far from where I was studying. This was still in the age when Apple was (a) a computer company, and (b) on the PowerPC architecture, so it surprised me. The Power Mac G5 cheesegraters were the hotness by that point, so I suppose the lowly G3 was considered passé. My roommate drove me over to pick it up, and soon I had a Mac OS 8.6 machine in my dorm.
That Power Mac G3 stuck with me for many years. It subsumed many of the original duties of my beloved original iMac, because it was so trivial to upgrade and tinker with. Sitting next to each other, a layperson would have been forgiven for thinking the iMac was the “monitor” for the G3. It was like they were siblings.

I eventually upgraded the machine with as much memory as I could, and a 400 MHz G3 CPU which massively improved its performance. I added a Gigabit Ethernet card, a Zip drive with the original Blue and White bezel, and a Sonnet IDE controller to bypass the problematic onboard controller these early revision machines had. In some ways the earlier beige Power Macintosh G3 was a better computer, but this G3 was like a souped-up version of that original iMac.
But I’m using the past tense here, as you likely spotted. I ran into money issues shortly after returning to Sydney from Singapore, and had to make some difficult decisions. The Power Mac G3 turned out to be quite the investment; I sold it for nearly ten times what I paid for it, even accounting for all the upgrades. It went to a musician in North Sydney who needed it for some old FireWire kit. I’m glad it went to a good place, but it was still heartbreaking.
So we come to the present. Last year someone in Lindfield was selling their immaculate Quicksilver Power Mac G4. I’d never owned one, but it retained many of the aspects of that classic G3 enclosure I’d loved for so many years. It’s basically the perfect Mac OS 9 machine, and it runs earlier Mac OS X and NetBSD/macppc great too. But at the risk of getting mushy, I still had a Blue and White Power Mac G3-sized hole in my nostalgic fool’s heart.
The same week that a generous member of the NetBSD Sydney community gave me her incredible SGI Indigo2 machine, someone in the Sydney CBD was selling the exact model of Power Mac G3 I originally had, even down to the upgraded 400 MHz G3. Within a day of it being listed, Clara and I were on the train home with it on the seat next to us.

Would you look at that, they’re family :’).
It’s funny that this specific machine is largely in the same condition as that Power Mac G3 I picked up in 2006, save for a full complement of memory. I’d love to get a Zip drive and bezel for it, and I’ll likely forgo the Sonnet IDE controller this time for a 64-bit SCSI interface, or even a SATA card. The only major difference is that the side panels are in significantly better cosmetic condition, which is nice.
I expect the Quicksilver will remain my primary classic Mac machine, but I am so happy to have it on the table here. That shade of blue is perhaps the most important colour from my childhood, right alongside… beige! Go figure.

It now also has its own page on the Retro Corner.
By Ruben Schade in Sydney, 2025-10-24.
This report was written by Dennis Onyeka as part of Google Summer of Code 2025.
This is the second blog post about his work. If you missed the first blog post, please read Google Summer of Code 2025 Reports: Enhancing Support for NAT64 Protocol Translation in NetBSD.
Typical rules look like:
map wm0 algo "nat64" 64:ff9b:2a4:: -> 192.0.2.33
map wm0 algo nat64 plen 96 64:ff9b::8c52:7903 <- 140.82.121.3
This tells NPF to translate outgoing IPv6 packets using the prefix 64:ff9b:2a4::/96, rewriting them to use the IPv4 address 192.0.2.33. When the packet returns and hits NPF, it changes the source from GitHub's IPv4 to GitHub's IPv6 address and then rewrites the header.
npf_nat64_rwrheader() rewrites the IPv6 header to an IPv4 header.
- The IPv4 address of github.com is extracted from the IPv4-embedded IPv6 address defined in the rule configuration (e.g. 140.82.121.3 from 64:ff9b::8c52:7903).
- Checksums are recalculated with in4_cksum() or in6_cksum().
- Address embedding and header rewriting are handled by the npf_embed_ipv4() and npf_nat64_rwrheader() routines.
- The npf.conf(5) syntax and parser were extended to accept NAT64 configuration parameters.
- Testing was done with ping, curl and dig, observing packets using tcpdump and Wireshark.

This project successfully integrates NAT64, along with a separate DNS64 configuration, into NPF, enabling IPv6-only clients to reach IPv4-only servers through seamless translation, although further changes and implementation work are still needed.
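For readers unfamiliar with in4_cksum() and in6_cksum(): the underlying algorithm is the RFC 1071 ones'-complement sum. Here is a small user-space Python sketch of it, purely illustrative — the kernel routines are written in C and operate on mbuf chains:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: 16-bit ones'-complement sum, complemented."""
    if len(data) % 2:            # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# A classic IPv4 header example, with the checksum field zeroed out:
header = bytes.fromhex("4500003c1c46400040060000ac100a63ac100a0c")
print(hex(internet_checksum(header)))  # -> 0xb1e6
```

When NAT64 rewrites addresses, the kernel has to recompute (or incrementally update) exactly this sum over the new header and pseudo-header.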
This is the end of my GSoC program. It was an exciting experience working with systems developers, and I certainly intend to remain an active contributor to the NetBSD codebase.
Source code of the Google Summer of Code project can be found at the following branch.
This started as a train of thought on Mastodon last week, but it was one that I thought was worth mentioning here. I was surprised to realise recently that I have and maintain more installs of NetBSD than FreeBSD. I know, shocking right?
Admittedly this is as much the fault of FreeBSD’s useful features as NetBSD’s utility. I had a project in the last year to consolidate as many disparate services as possible into a single FreeBSD bhyve and jail host at home, and a couple of cloud VMs for backup and orchestration. FreeBSD’s tooling and OpenZFS integration is what made this possible, and seamless. I’m a begrudging Linux guy at work, but I do encourage clients and internal teams to use FreeBSD as a viable alternative where possible, especially for storage.
On the other hand, you have different fingers. NetBSD runs on all my non-Mac laptops as my portable OS of choice, our home router with npf(7), a couple of VMs with chroot(8) environments for legacy stuff like FTP, and is my default dual-booted OS on every retro computer I get going again. pkgsrc even runs on my Macs. Every time I bootstrap and install a new NetBSD machine, I remember why I fell in love with it the first time I got it running on my childhood iBook G3 all those years ago. About all I wish it had for new users is cgd(8) in the installer for encrypted drives; I’m working on building some scripts to automate this for my own installs that might be useful.
I’m sure BSD people will ask me what the point of NetBSD is, just as Linux people ask what the point of BSD is. Those people are missing out.
By Ruben Schade in Sydney, 2025-10-20.
I've just installed virt-manager with pkgin on NetBSD 9.2, because I want to run virtual machines with qemu + nvmm on NetBSD 9.2. The installation of virt-manager went OK. But when I ran it, an error came up:
netbsd-marietto# virt-manager
Traceback (most recent call last):
File "/usr/pkg/share/virt-manager/virt-manager.py", line 386, in <module>
main()
File "/usr/pkg/share/virt-manager/virt-manager.py", line 247, in main
from virtManager import cli
File "/usr/pkg/share/virt-manager/virtManager/cli.py", line 29, in <module>
import libvirt
ImportError: No module named libvirt
Googling a little bit, I may have found the solution here:
https://www.unitedbsd.com/d/285-linux-user-and-netbsd-enthusiast-hoping-to-migrate-some-day
where "kim" said:
Looking at pkgsrc/sysutils/libvirt/PLIST it doesn't look like the package provides any Python bindings -- which is what the "ImportError: No module named libvirt" error message is about. You could try py-libvirt from pkgsrc-wip and see how that works out.
I tried to start the compilation like this:
netbsd-marietto# cd /home/mario/Desktop/pkgsrc-wip/py-libvirt
netbsd-marietto# make
but I got this error:
make: "/home/mario/Desktop/pkgsrc-wip/py-libvirt/Makefile" line 15: Could not find ../../wip/libvirt/buildlink3.mk
make: "/home/mario/Desktop/pkgsrc-wip/py-libvirt/Makefile" line 16: Could not find ../../lang/python/distutils.mk
make: "/home/mario/Desktop/pkgsrc-wip/py-libvirt/Makefile" line 17: Could not find ../../mk/bsd.pkg.mk
make: Fatal errors encountered -- cannot continue
If you want to see the content of the Makefile, here it is:
gedit /home/mario/Desktop/pkgsrc-wip/py-libvirt/Makefile
#$NetBSD: Makefile,v 1.32 2018/11/30 09:59:40 adam Exp $
PKGNAME= ${PYPKGPREFIX}-${DISTNAME:S/-python//}
DISTNAME= libvirt-python-5.8.0
CATEGORIES= sysutils python
MASTER_SITES= https://libvirt.org/sources/python/
MAINTAINER= [email protected]
HOMEPAGE= https://libvirt.org/sources/python/
COMMENT= libvirt python library
LICENSE= gnu-lgpl-v2
USE_TOOLS+= pkg-config
.include "../../wip/libvirt/buildlink3.mk"
.include "../../lang/python/distutils.mk"
.include "../../mk/bsd.pkg.mk"
Can someone help me fix the error? Many thanks.
I am trying to find a way to get built-in Bluetooth working on a Raspberry Pi 3 with NetBSD 10.
Did anyone succeed at that?
It seems that a custom kernel config is needed as the drivers are not compiled in by default.
Does anyone here know how to do it?
Forgive the Betteridge’s Law of Headlines title, but it’s posed as a question because it’s a case of it depends.

When you’ve been blogging for a few years like me, you get regular questions from people asking how they can get started. Which is great! Some of the more intrepid or curious among you also note that my HTML source credits Hugo as the site generator, and naturally want to know if they should use it too.
My answer is always the same: think about the writing first, and the tool second. Aka, you should worry less about the tools you use, and more about getting your thoughts down. It’s easy to get mired in the intricacies of whatever CMS you’ve chosen, when really even a basic ClassicPress, Textpattern, or Micro blog would do the trick. Heck, even Mastodon has support for longform text. You can always move your words elsewhere if it’s something you want to pursue.
I guess then I’m really answering whether Hugo is worth considering. I can only speak for my own experience, but I’m assuming that’s what you’re here for.
I’ve been using Hugo as my static-site generator for more than a decade. It’s the only one I’ve used that has been able to scale to my now tens of thousands of posts without breaking a sweat. My budget Fedora/FreeBSD Ryzen 5700X, my NetBSD ThinkPad X230, my MacBook Air, and my aging Xeon homelab server can all preview or generate the site within 10-40 seconds, as can my cloud instance with a single vCPU core. Go, and by extension Hugo, are fast… especially when coming from the dozens of minutes spent waiting for Jekyll.
Hugo’s documentation and community are also excellent. I’ve never had to post a question in the forums because invariably someone has already asked it, and received assistance. @bep and @jmooring are worth their weight in gold with their patience and clear instructions.
But there are challenges. I won’t go into the nitty gritty of static versus server-side hosting here, save to mention that you’ll be using a text editor and running terminal commands. If that’s not your cup of tea, I get it. Textpattern would still be my recommendation for a server-side platform that’s easy to install and maintain without the “kitchen sink-itis” that comes with other modern blogging software.
Hugo’s specific challenges, at least for me, can be boiled down to Go’s somewhat confusing template syntax, and Hugo itself being a moving target at times.
Go’s templating system is not as intuitive as Liquid, which is what the Jekyll static site generator uses. This may not be an issue if you use a pre-baked Hugo theme, or only tweak existing ones for your blog. But for those looking to create their own themes, Hugo’s taxonomy system and layouts can be challenging, especially at first. It makes way more sense to me now that I’ve been learning Go and understand some of its data structures (such as slices), but there’s definitely a learning curve.
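To give a flavour of that syntax, here’s a minimal, hypothetical list template of the sort you’d put in a theme’s layouts/_default/list.html. The range and first functions and the .Title and .RelPermalink page variables are standard Hugo; the surrounding markup is illustrative:

```html
<ul>
  {{/* Loop over the ten most recent pages in this section */}}
  {{ range first 10 .Pages }}
    <li><a href="{{ .RelPermalink }}">{{ .Title }}</a></li>
  {{ end }}
</ul>
```

The dot — the “current context” that changes inside each range block — is the part that trips most newcomers up.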
Hugo also has a reputation for breaking changes. Newer releases regularly revise function calls and configuration syntax, so a theme you used or last updated a year ago likely won’t work today. The Hugo binary will warn you of pending changes when you rebuild your site, but this still means there’s a steeper ongoing maintenance cost for a current Hugo site than for other CMSes that promise backwards compatibility. Hugo is not a set-and-forget tool, unless you don’t update it.
In short, is a phrase with two words. I’d say Hugo is worth considering if you want to start blogging, provided you keep on top of changes, take the time to learn its syntax, and are fine with using a terminal to maintain it. I do sometimes miss having a database I can use to bulk update posts, but then I remember I’m not having to maintain a database. Irrespective of its aforementioned shortcomings, I also know that when I run that hugo command I’ll have an entire site done in the time it takes me to grind beans for a coffee. That’s worth a lot to me.
By Ruben Schade in Sydney, 2025-10-14.
Anyone have experience building NetBSD specifically for embedded devices?
I got this RISC-V tablet here, and it has some issues. It came a few weeks ago with Debian Bullseye installed. I am comfortable with Debian, but Bullseye was two releases ago, and even worse, it is pinned to a 2021 snapshot. So its software is missing all those vital security patches. Firefox can't even install current extensions, because it is too out of date. I tried upgrading it to Sid three times, but each time the superblocks in the file system became corrupted. So now I am updating it to a 2023 snapshot, but still, that was two years ago, and the superblocks might still get corrupted in the process.
I was thinking about starting fresh with a NetBSD install, but it appears a custom kernel is going to be needed before that can happen. The manufacturer puts out their own custom Linux kernel, and I imagine it includes some hardware-specific modules so everything is compatible. I would also like to incorporate a file system like F2FS or NILFS, one designed purposefully around embedded devices which run off eMMC storage.
It can't be that difficult to accomplish, having built NetBSD and OpenBSD a few times before. If it is, I might just throw the tablet on eBay and start over.
pkgsrc 2025Q3 is now available for NetBSD; this includes binary packages.
I have updated my NetBSD 10.1 packages using 2025Q3 from binary pkgs without any issues.
Jesse Smith at Distrowatch reviews NetBSD this week:
https://distrowatch.com/weekly.php?issue=20250922#netbsd
I have a good mix today.
Your unrelated music video of the week: Return of the Phantom by VOID. 2025 or 1985? Can’t easily tell.
This report was written by Dennis Onyeka as part of Google Summer of Code 2025.
The goal of the NAT64 project is to implement IPv6-to-IPv4 translation inside NPF (the NetBSD Packet Filter). NAT64 enables IPv6-only clients to communicate with IPv4-only servers by embedding and extracting IPv4 addresses in IPv6 addresses as per RFC 6052 and RFC 6145. We are using a 1:1 mapping for now to implement NAT64 translation, whereby an IPv6 host will use its IPv4 address to communicate with an IPv4-only server. As an example, we will use github.com (140.82.121.3), which supports only IPv4. In order to enable NAT64 in NPF we will have a rule like this:
map wm0 algo "nat64" 64:ff9b:2a4:: -> 192.0.2.33
This means we want to use the host IPv4 address associated to wm0 interface 192.0.2.33 to access the public internet in order to communicate with GitHub's IPv4 server.
During this process, the IPv6 header will be rewritten to IPv4. Part of the IP structure requires source and destination addresses, so our new IPv4 source address will be the host's IPv4 address (which is likely to change during further improvement), and the IPv4 destination address will be taken from GitHub's IPv4-embedded IPv6 address, i.e. the IPv4 address embedded into the IPv6 address (64:ff9b::8c52:7903) obtained from the DNS resolver.
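To make the address embedding concrete, here is a short user-space Python sketch of the RFC 6052 mapping described above (illustrative only; NPF's npf_embed_ipv4() does this in the kernel in C):

```python
import ipaddress

def embed_ipv4(prefix: str, v4: str) -> str:
    """Embed an IPv4 address into the low 32 bits of a /96 NAT64 prefix (RFC 6052)."""
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 96, "only the /96 case is sketched here"
    combined = int(net.network_address) | int(ipaddress.IPv4Address(v4))
    return str(ipaddress.IPv6Address(combined))

print(embed_ipv4("64:ff9b::/96", "140.82.121.3"))  # -> 64:ff9b::8c52:7903
```

The last 32 bits of the synthesized address are exactly GitHub's IPv4 address in hexadecimal (140.82.121.3 = 8c.52.79.03).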
Note that NPF is our router, so we will have to enable and configure a DNS caching resolver like unbound on the NetBSD machine. The job of the DNS64 is to synthesize AAAA responses from IPv4-only A records using the well-known prefix (64:ff9b::/96), embedding GitHub's IPv4 address (140.82.121.3) into it, which gives 64:ff9b::8c52:7903. The hexadecimal values represent GitHub's IPv4 address.
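A DNS64 setup of the kind described might look like the following minimal unbound.conf fragment; module-config and dns64-prefix are real unbound options, though the rest of a working resolver configuration is elided here:

```
server:
    # Synthesize AAAA records from A records using the well-known prefix
    module-config: "dns64 validator iterator"
    dns64-prefix: 64:ff9b::/96
```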
When the packet is returning from GitHub, it uses the IPv6 interface, so the IPv4 address part will be embedded back into its required position based on the prefix length passed by the user on the configuration. Then it will be sent to the IPv6 interface of the host machine.
So far, I’ve been focusing on the core translation path, making sure headers are rewritten correctly and transport checksums are updated. This also requires changes in the userland and kernel.
- Extended npf.conf, so users can specify the NAT64 prefix length.
- Tested with ping, nc (netcat), curl, and dig.
- Used tcpdump/tshark/Wireshark on NetBSD to inspect packets before/after translation.
- Rebuilt with build.sh with kernel debug logs enabled.
- Traced with ktrace.

Experience: Working deep inside NetBSD’s kernel networking stack has been challenging but rewarding. It gave me hands-on experience with mbufs, packet parsing, and kernel-level checksums.
Hi all,
I'm new here, and to *BSD, but I have an old DEC DS10 that I had my company buy for a project, which we later abandoned. So instead of letting it collect dust in a dank basement, I have a DS10 humming away in my office running NetBSD 9.3.
It took a while to get it installed, but now the setup seems quite content. The only issue I've bumped into is that when booting, the process stops and asks for a terminal type (hit Enter for VT100), which then dumps me into a root prompt. To continue booting, I have to "exit", after which it is up and running just fine.
Any idea how to stop this from happening, and have a nice, smooth, boot process?
There's nothing interesting in /var/log/messages, aside from three complaints about its pre-DS10-era display card. I don't think it'll help, but here are the complaints:
/netbsd: [ 1.0000000] pm3fb0 at pci0 dev 14 function 0: 3D Labs GLINT Permedia 3 (rev. 0x01)
/netbsd: [ 1.0000000] autoconfiguration error: pm3fb0: no width property
/netbsd: [ 1.0000000] autoconfiguration error: pm3fb0: no height property
/netbsd: [ 1.0000000] autoconfiguration error: pm3fb0: no depth property
Thanks for any/all help!
Spiff
So I'm curious about NetBSD, and wondering if it is safe to use as a daily-driver desktop. I was doing my research and noticed that there isn't a lot of activity on the security mailing list. I already subscribe to the OpenBSD and FreeBSD security mailing lists, having run both of those BSDs, and they have multiple security vulnerabilities every few months. NetBSD hasn't had any this year and only two last year. Is the security really that good, or is no one looking for bugs? I mean, I work as a system admin, and my work's Windows Server and Red Hat Linux systems get DOZENS of bug fixes / security fixes each and every month!
Thank you for your time, and I hope you don't think I am attacking NetBSD; I'm just curious why it's "crickets chirping" on the security mailing list about any bugs found.
Also, when bugs are found is there any way to patch except for recompiling the entire OS?
Thanks in advance!
This year, 2025, the KDE Community held its yearly conference in Berlin, Germany. On the way I reinstalled FreeBSD on my Frame.work 13 laptop in another attempt to get KDE Plasma 6 Wayland working. Short story: yes, KDE Plasma 6 Wayland on FreeBSD works.
↫ Adriaan de Groot
Adriaan de Groot is a long-time KDE developer and FreeBSD package maintainer, and he’s published a short but detailed guide on setting up a KDE Plasma desktop on FreeBSD using Wayland instead of X11. With the Linux world slowly but finally leaving X11 behind, the BSD world really has little choice but to follow, especially if they want to continue offering the two major desktop environments. Most of KDE and GNOME are focused on Linux, and the BSDs have always kind of tagged along for the ride, and over the coming years that’s going to mean they’ll have to invest more in making Wayland run comfortably on BSD.
Of course, the other option would be the KDE and GNOME experience on the BSDs slowly degrading over time, but I think especially FreeBSD is keen to avoid that fate, while OpenBSD and NetBSD seem a bit more hands-off in the desktop space. FreeBSD is investing heavily in its usability as a desktop operating system, and that’s simply going to mean getting Wayland support up to snuff. Not only will KDE and GNOME slowly depend more and more on Wayland, Xorg itself will also become less maintained than it already is.
Sometimes, the current just takes you where it’s going.
I am a huge fan of my Rock 5 ITX+. It wraps an ATX power connector, a 4-pin Molex, PoE support, 32 GB of eMMC, front-panel USB 2.0, and two Gen 3×2 M.2 slots around a Rockchip 3588 SoC that can slot into any Mini-ITX case. Thing is, I never put it in a case because the microSD slot lives on the side of the board, and pulling the case out and removing the side panel to install a new OS got old with a quickness.
I originally wanted to rackmount the critter, but adding a deracking difficulty multiplier to the microSD slot minigame seemed a bit souls-like for my taste. So what am I going to do? Grab a microSD extender and hang that out the back? Nay! I’m going to neuralyze the SPI flash and install some Kelvin Timeline firmware that will allow me to boot and install generic ARM Linux images from USB.
↫ Interfacing Linux
Using EDK2 to add UEFI to an ARM board is awesome, as it solves some of the most annoying problems of these ARM boards: they require custom images specifically prepared for the board in question. After flashing EDK2 to this board, you can just boot any ARM Linux distribution – or Windows, NetBSD, and so on – from USB and install it from there. There’s still a ton of catches, but it’s a clear improvement.
The funniest detail for sure, at least for this very specific board, is that the SPI flash is exposed as a block device, so you can just use, say, the GNOME Disk Utility to flash any new firmware into it. The board in question is a Radxa ROCK 5 ITX+, and they’re not all that expensive, so I’m kind of tempted here. I’m not entirely sure what I’d need yet another computer for, honestly, but it’s not like that’s ever stopped any of us before.
This report was written by Ethan Miller as part of Google Summer of Code 2025.
The goal is to improve the capabilities of asynchronous IO within NetBSD. Originally the project espoused a model that pinned a single worker thread to each process. That thread would iterate over pending jobs and complete blocking IO. From this, the logical next step was to support an arbitrary number of worker threads. Each process now has a pool of workers recycled from a freelist, and jobs are grouped per-file so that we do not thrash multiple threads on the same vnode which would inevitably lock. This grouping also opens the door for future optimisations in concurrency. The guiding principle is to keep submission cheap, coalesce work sensibly, and only spawn threads when the kernel would otherwise block.
We pin what is referred to as a service pool to each process, with each service pool capable of spawning and managing service threads. When a job is enqueued it is distributed to its respective service thread. For regular files we coalesce jobs that act on the same vnode into one thread. If we fall back to the synchronous IO path within the kernel it would lock anyway, but this approach is prudent because if more advanced concurrency optimisations such as VFS bypass are implemented later this is precisely the model that would be required. At present, since that solution is not yet in place, all IO falls back to the synchronous pipeline. Even so there are performance gains when working with different files, since synchronous IO can still run on separate vnodes at the same time.
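The per-vnode coalescing described above can be pictured as a small dispatch function: every job carries the identity of the vnode it targets, and jobs for the same vnode always map to the same service thread, so they serialize there instead of contending on the vnode lock. This is an illustrative userspace sketch, not the actual NetBSD code; all names here are invented:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative only: stand-in for the per-process pool of service threads. */
#define NWORKERS 8

/* A job targets one file; we use the vnode's address as its identity. */
struct aio_job {
    const void *vnode;   /* which file the IO acts on */
    size_t      nbytes;  /* how much to read or write */
};

/* Jobs on the same vnode map to the same worker, so they are handled in
 * order by one thread instead of several threads fighting over one vnode
 * lock, while jobs on different vnodes can still run concurrently. */
static unsigned pick_worker(const struct aio_job *job)
{
    uintptr_t v = (uintptr_t)job->vnode;
    v ^= v >> 9;                 /* cheap pointer hash */
    return (unsigned)(v % NWORKERS);
}
```

The important property is only that the mapping is deterministic per vnode; the real implementation tracks per-vnode job lists rather than hashing, but the effect on concurrency is the same.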
Through the traditional VFS read/write path, requests eventually reach
bread/bwrite and block upon a cache miss until completion. This kills
concurrency. I considered a solution that bypassed the normal vnode
read/write path by translating file offsets to device LBAs with VOP_BMAP,
constructing block IO at the buffer and device layer, submitting with
B_ASYNC, and deferring the wait to the AIO layer with biodone bookkeeping
instead of calling biowait at submission. This keeps submission short and
releases higher level locks before any device wait. The assumptions are
that filesystem metadata is frequently accessed therefore cached so
VOP_BMAP usually does not block, that block pointers for an inode mostly
remain stable for existing data, and that truncation does not rewrite past
data. For the average case this would provide concurrency on the same file.
In practice, however, it was exceptionally difficult to implement because
the block layer lacks the necessary abstractions.
This is, however, exactly the solution espoused by FreeBSD, and they make it work well because struct bio is an IO token independent of the page and buffer cache. GEOM can split or clone a bio, queue the pieces to devices, collect child completions, and run the parent callback. Record locks are treated as advisory, so once a bio is in flight the block layer completes it even if the advisory state changes. NetBSD has no equivalent token: struct buf is both a cache object and an IO token, tied to UBC and drivers through biodone and biowait. For now the implementation of service pools and service threads lays the groundwork for asynchronous IO. Once the BIO layer reaches adequate maturity, integrating a bio-like abstraction will be straightforward and yield immediate improvements for concurrency on the same vnode. The logical next step is to design and port something comparable to FreeBSD’s struct bio, which would map very cleanly onto the current POSIX AIO framework.
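The appeal of a FreeBSD-style token is easy to sketch: a bio is essentially a completion record independent of any cache object, so a parent request can be split into children, each child completes independently, and the parent callback fires only when the last child is done. Below is a toy single-threaded userspace model of that bookkeeping, not FreeBSD's actual struct bio; the names are invented, and a real implementation would use atomics and locks:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of a FreeBSD-style IO token: a descriptor that is independent
 * of the buffer cache, so it can be split into children and completed
 * purely through callbacks. Single-threaded for clarity. */
struct bio {
    int   pending;                /* children still in flight */
    int   error;                  /* first error seen, if any */
    void (*done)(struct bio *);   /* parent completion callback */
    int   completed;              /* set by the callback below */
};

/* Splitting just records how many child completions to wait for. */
static void bio_split(struct bio *parent, int nchildren)
{
    parent->pending = nchildren;
}

/* Called once per child as the "device" finishes it; the parent callback
 * runs exactly once, when the last child completes. */
static void bio_child_done(struct bio *parent, int error)
{
    if (error != 0 && parent->error == 0)
        parent->error = error;
    if (--parent->pending == 0 && parent->done != NULL)
        parent->done(parent);
}

static void mark_completed(struct bio *bp)
{
    bp->completed = 1;
}
```

The point of the model is that nothing in it references a cache: the token exists only to account for in-flight IO, which is what lets GEOM split, clone, and re-queue requests freely.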
My development setup is optimised for building and testing quickly. I use
scripts to cross-build the kernel and boot it under QEMU with a small FFS
root. The kernel boots directly with the QEMU option -kernel without any
supporting bootloader. Early on I tested against a custom init dropped onto
an FFS image. Now I do the same except init simply launches a shell which
allows me to run ATF tests without a full distribution. This makes it
possible to compile a new kernel and run tests within seconds.
One lesson I have taken away is that progress never happens overnight. It takes enormous effort to get even a few thousand lines of highly multi-threaded race-prone code to behave consistently under all conditions. Precision in implementation is absolutely required. My impression of NetBSD is that it is a fascinating project with an abundance of seemingly low-hanging fruit. In reality none of it is truly low-hanging or simple, but compared to Linux there remains a great deal of work to be done. It is not easy work but the problems are visible and the path forward is clearer.
I also want to note that I intend to provide long-term support for this code should any issues arise.
The code written as part of this project can be found here.
I have a Raspberry Pi 3 with NetBSD 10, running CI jobs. Because SD cards are notoriously unreliable, I attached a USB hard drive to it. The HDD has a swap partition and scratch space for the builds, while root is on the SD. Unfortunately, some writes ended up going to the root file system after all, which meant that the SD card was destroyed after only about a year!
Earlier this year, I was trying to get actual daily work done on HP-UX 11.11 (11i v1) running on HP’s last and greatest PA-RISC workstation, the HP c8000. After weeks of frustration caused first by outdated software no longer working properly with the modern web, and then by modern software no longer compiling on HP-UX 11.11, I decided to play the ace up my sleeve: NetBSD’s pkgsrc has support for HP-UX. Sadly, HP-UX is obviously not a main platform or even a point of interest for pkgsrc developers – as it should be, nobody uses this combination – so various incompatibilities and more modern requirements had snuck into pkgsrc, and I couldn’t get it to bootstrap. I made some minor progress here and there with the help of people far smarter than I, but in the end I just lacked the skills to progress any further.
This story will make it to OSNews in a more complete form, I promise.
Anyway, in May of this year, it seems Brian Robert Callahan was working on a very similar problem: getting pkgsrc to work properly on IBM’s AIX.
The state of packages on AIX genuinely surprises me. IBM hosts a repository of open source software for AIX. But it seems pretty sparse compared to what you could get with pkgsrc. Another website offering AIX packages seems quite old. I think pkgsrc would be a great way to bring modern packages to AIX.
I am not the first to think this. There are AIX 7.2 pkgsrc packages available at this repository; however, all the packages are compiled as 32-bit RISC System/6000 objects. I would greatly prefer to have everything be 64-bit XCOFF objects, as we could do more with 64-bit programs. There also aren’t too many packages in that repository, so I think starting fresh is in our best interest.
As we shall see, this was not as straightforward as I would have hoped.
↫ Brian Robert Callahan
Reading through his journey getting pkgsrc to work properly on AIX, I can’t help but feel a bit better about not being able to get it to work on HP-UX 11.11 myself. Callahan was working with AIX 7.2 TL4, which was released in November 2019 and is still actively supported by IBM on a maintained architecture, while I was working with HP-UX 11.11 (or 11i v1), which last got some updates in and around 2005, running on an architecture that’s well dead and buried. Looking at what Callahan still had to figure out and do, it’s not surprising someone with my lack of skill in this area couldn’t get it working.
I’m still hoping someone far smarter than I stumbles upon an HP c8000 and dives into getting pkgsrc to work on HP-UX, because I feel pkgsrc could turn an otherwise incredibly powerful HP c8000 from a strictly retro machine into something borderline usable in the modern world. HP-UX is much harder to virtualise – if it’s even possible at all – so real hardware is probably going to be required. The NetBSD people on Mastodon suggested I could possibly give remote access to my machine so someone could dive into this, which is something I’ll keep under consideration.
I have this code:
NSData* data = [[pipe fileHandleForReading] readDataToEndOfFile];
NSString* s = [[NSString alloc] initWithData: data encoding: NSUTF8StringEncoding];
(For context, this is part of a function that executes system commands.)
And this last line produces a warning:
Calling [GSPlaceholderString -initWithData:encoding:] with incorrect signature. Method has @28@0:8@16i24 (@28@0:8@16i24), selector has @28@0:8@16I24
The command works fine and gives me the output I want, but I don't want to see this warning: I am working on a console application, and these warnings severely disrupt the experience.
Is there any way to disable those warning messages? I tried to "#define NSLog(...)", but it didn't work.
I am using the GCC compiler on NetBSD 10.0.
I'm trying to set up a VM running NetBSD 6.0 (very unfortunately I have to use this specific version). While I've managed to find the installer ISO very easily, I can't find a single working pkgin / pkgsrc mirror, seems that they've all been removed.
Is anyone aware of any remaining mirrors of those ancient versions, or some other method of installing software packages (mind that they need to be the same versions that shipped with NetBSD 6.0)?
The Hurd, the collection of services that run atop the GNU Mach microkernel, has been in development for a very, very long time. The Hurd is intended to serve as the kernel for the GNU Project, but with the advent of Linux and its rapid rise in popularity, it’s the Linux kernel that became the de facto kernel for the GNU Project, a combination we’re supposed to refer to as GNU/Linux. Unless you run Alpine, of course. Or you run any other modern Linux distribution, which probably contains more non-GNU code than it does GNU code, but I digress.
The Hurd is still in development, however, and one of the more common ways to use The Hurd is by installing Debian GNU/Hurd, which combines the Debian package repositories with The Hurd. Debian GNU/Hurd 2025 was released yesterday, and brings quite a few large improvements and additions.
This is a snapshot of Debian “sid” at the time of the stable Debian “Trixie” release (August 2025), so it is mostly based on the same sources. It is not an official Debian release, but it is an official Debian GNU/Hurd port release.
↫ Samuel Thibault
About 72% of the Debian archive is available for Debian GNU/Hurd, for both i386 and amd64. This indeed means 64bit support is now available, which makes use of the userland disk drivers from NetBSD. Support for USB disks and CD-ROM was added, and the console now uses xkb for keyboard layout support. Bigger-ticket items are working SMP support and a port of the Rust programming language. Of course, there’s a ton more changes, fixes, improvements, and additions as well.
You can either install Debian GNU/Hurd using the Debian installer, or download a pre-installed disk image for use with, say, qemu.
The long-planned next meeting of NYCBUG is tomorrow. If you are going and have a Framework laptop, please bring it for testing HDMI. I assume it’s related to ongoing support work.
The meeting has been canceled because no presenter was available.
If you have been following source-changes, you may have noticed the creation of the netbsd-11 branch! It comes with a lot of changes that we have been working on:
Compatibility support code, like 32bit on 64bit machines, has been separated into special sets, to allow easy installation of machines that do not need to be able to run 32bit code.
Install media for some architectures has been split into small ("CD/R") images (without debug and compat sets) and full ("DVD-R") sets. This is also useful on hardware that came with a CD drive (instead of a DVD drive) and cannot boot from a USB stick.
Manual pages come in two flavors, html and mandoc. Both now have their own sets, so one or the other can easily be left out of an installation.
All mac68k and macppc ISO images are now bootable.
... that we always forget to mention. The list of changes can be found in the beta build, split into changes up to the creation of the branch and changes pulled up to the branch before the 11.0 release.
A few work-in-progress items unfortunately did not make it into this branch and will not be pulled up. The most missed ones are:
These will now happen carefully in HEAD, and after stabilization they might be a good reason to create the next major branch earlier.
We try to test NetBSD as best as we can, but your testing can help to make NetBSD 11.0 a great release. Please test it and let us know of any bugs you find.
Binaries are available on NetBSD daily builds and, for various ARM-based devices (with board-dependent boot setup), on arm install images.
Please test NetBSD 11.0_BETA on your hardware and report any bugs you find!
No promises, but we will try to make this one of the shortest release cycles ever...
Ideally we will be in release candidate state at EuroBSDCon late in September, and cut the final release early in October.
pkg_admin audit said:
Package sqlite3-3.49.2 has a memory-corruption vulnerability, see https://nvd.nist.gov/vuln/detail/CVE-2025-6965
Package libxml2-2.14.4 has a use-after-free vulnerability, see https://nvd.nist.gov/vuln/detail/CVE-2025-49794
Package libxml2-2.14.4 has a denial-of-service vulnerability, see https://nvd.nist.gov/vuln/detail/CVE-2025-49795
Package libxml2-2.14.4 has a denial-of-service vulnerability, see https://nvd.nist.gov/vuln/detail/CVE-2025-49796
Package libxml2-2.14.4 has an integer-overflow vulnerability, see https://nvd.nist.gov/vuln/detail/CVE-2025-6021
Package libxml2-2.14.4 has a buffer-overflow vulnerability, see https://nvd.nist.gov/vuln/detail/CVE-2025-6170
I usually go to the NetBSD site, check the errata and follow the instructions, but...
https://www.netbsd.org/support/security/patches-10.1.html
The page comes up empty. How do I fix/patch those vulnerabilities?
I have also run
sysupgrade auto
and rebooted, but the messages still appear.
For several years our autobuild cluster at Columbia University has been providing our CI and produced many official release builds.
Over the years, with new compiler versions and more targets to build, the build times (and space requirements) grew. A lot. A few weeks ago the old cluster needed slightly more than nine and a half hours for a full build of -current.
So it was time to replace the hardware. At the same time we moved to another colocation with better network connectivity.
But the hardware was not the only thing that aged: the CI system we use (a bunch of shell scripts) also needed a serious overhaul.
As reported in the last AGM, riastradh@ did most of the work to make the scripts able to deal with mercurial instead of cvs.
This is a necessary step in the preparation of our upcoming move away from cvs, which is now completed. While the build cluster currently (again) runs from cvs, we completed a few full builds from mercurial in the last few weeks.
The cvs runs for builds were purely time driven. To deal with latency between cvs.NetBSD.org and anoncvs.NetBSD.org there was a 10-minute lag built into the system, so all timestamps (the names of the build directories) were exact to the minute only and had a trailing zero.
This does not fit well with modern distributed source control systems. For a hg or git build the cluster fetches the repository state from the master repository and updates the checkout directory to the latest state of a branch (or HEAD). So the repository state dictates the time stamp for the build, not vice versa as we did for cvs.
As a result there is a difference between the time a build is started (wall clock) and the time of the last change of the current branch in the repository (build time or reproducible ID). You can see both times in the status page. Just right now it displays:
Currently building tag HEAD-lint, build 2025-07-22 19:19:02 UTC (started at: 2025-07-22 19:26:40 UTC). No results yet.
And, obviously, we can now easily skip builds when nothing has changed - which happens often on old branches, but also may happen when all HEAD builds are done before anything new has been committed. Hurry up, you lazy developers!
While we are still running from cvs, the process of checking if anything has changed is quite slow (in the order of five minutes). So you may notice a status display like:
Currently searching for source changes... while the "cvs update" is crawling. When running from hg (or git) this will be a lot faster.
The new cluster consists of four build machines, each equipped with dual 16-core EPYC CPUs and 256 GB of RAM. As a "brain" we have an additional controller node with 32 GB RAM and a (somewhat weaker) Intel CPU. The controller node creates all source sets, does repository operations, and distributes sources, but does not perform any building itself.
On each build machine we run 8 builds in parallel, each build with low parallelism (-j 4). This tries to keep all CPUs saturated during most of the overall build. Individual builds have several phases with little or reduced possibility of parallelism, e.g. initially during configure runs, while linking llvm components, or near the end when compressing artefacts. The goal is not to run a single (individual) build as fast as possible, but to run the whole set of them in as little time overall.
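The numbers line up neatly: reading "dual 16-core EPYC" as 32 physical cores per machine (my interpretation of the specs, with SMT threads left as headroom), 8 concurrent builds at -j 4 is exactly 32 build jobs per machine, which is why the cores stay busy even when individual builds hit their serial phases. A quick sanity check of that arithmetic:

```c
/* Per-machine resources and load, as compile-time constants.
 * The "32 physical cores per machine" reading is an assumption. */
enum {
    CORES_PER_MACHINE = 2 * 16,  /* dual 16-core EPYC sockets      */
    JOBS_PER_MACHINE  = 8 * 4,   /* 8 builds, each run with -j 4   */
    MACHINES          = 4,       /* build machines in the cluster  */
    TOTAL_JOBS        = MACHINES * JOBS_PER_MACHINE,
};
```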
The head of the result page created for each build now tries to give all the details necessary to recreate that particular build, e.g.:
From source dated Tue Jul 22 04:45:13 UTC 2025
Reproducible builds timestamp: 1753159513
Repository: src@cvs:HEAD:20250722044513-xsrc@cvs:HEAD:20250720221743
The two timestamps are the same, one in seconds since the Unix epoch, the other human readable.
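As a quick check that the two stamps really are the same instant, here is a small C helper (hypothetical, not taken from the cluster scripts) that renders the epoch value the way the status page does:

```c
#include <stddef.h>
#include <time.h>

/* Render a reproducible-builds timestamp as a human-readable UTC date,
 * in the same format as the build result page. */
static void stamp_to_utc(time_t stamp, char *buf, size_t len)
{
    struct tm tm;
    gmtime_r(&stamp, &tm);
    strftime(buf, len, "%a %b %d %H:%M:%S UTC %Y", &tm);
}
```

Feeding it 1753159513 yields "Tue Jul 22 04:45:13 UTC 2025", matching the source date shown above.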
The repository ID is, for cvs, again based on timestamps. But for hg or git you will see the typical commit hash values here.
We considered using an off-the-shelf CI system and customizing it to our needs instead of moving the fully custom build system forward. We also talked to FreeBSD release engineering about it and asked for their experience. Overall the work for customizing an off-the-shelf solution looked roughly equivalent to the work needed now to modify the existing working solution, and we saw no huge benefit for the future picking either of them.
First and foremost, we can quickly verify if all of our supported branches indeed compile. While developers are supposed to commit only tested changes, usually they build at most one branch and architecture. But sometimes changes have unforeseen consequences, for example an install image of an exotic architecture growing more in size than expected.
Then, in theory, no NetBSD user has to waste any CPU cycles in order to install (for example) NetBSD-current or the latest NetBSD 9 tree. Instead you can simply download an image and install from that — no matter if your architecture is amd64 or a slightly more exotic one (but you will of course always be able to download the source tree and build that if that suits you better).
Note that a supported architecture is not just supported at one point in time, after which it might start collecting dust and stop compiling, making it hard to merge current changes back into that tree. Instead, once an architecture is supported, our CI system will keep building it as one of the many we support. Take the rather new Wii port as an example. Now that it is supported, you can, at least for the foreseeable future, download the latest and greatest NetBSD release, populate an SD card with the image, and boot it. Now, in half a year, and in 10 years (and even further down the line).
We are grateful to Two Sigma Investments, LP for providing the space and connectivity for the new build cluster.
I preassembled this list of links over time, so some of them have probably changed. For the “I’m sorry…” link, that just means more material.
In this blog post, I have described how I have been using Linux on my Amiga 4000. I hope this information can be of use to anyone who is planning to run Linux on their Amigas. Furthermore, I hope that the existing documentation on the Linux/m68k homepage gets updated at some point. Maybe the information in this blog post can help with that.
Debian 3.1 works decently for me, but everything is much slower compared to a PC. This is not really surprising if you run an operating system (last updated in 2008) on a CPU from the early 90s that still runs at a 25 MHz clock speed :).
↫ Sander van der Burg
The blog post in question is from January of this year, but as soon as I saw it I knew I had to post it here. It’s an incredibly intricate and detailed guide to running Linux on a 25 MHz Amiga 4000, including X11, networking, internet access, file sharing, and so, so much more – up to running Linux for Amiga inside FS-UAE. There’s so much love and dedication in this detailed guide, and I love it.
In fact, Van der Burg has a similar article about running NetBSD on the Amiga 4000, with the same level of detail, dedication, and information density. A fun note is that while X11 for Linux on the Amiga can’t seem to make use of the Amiga chipset, the X Window System on NetBSD does make use of it. I’m not surprised.
Articles like these are useful only for a very small number of people, but having this amount of knowledge concentrated like this will prove invaluable like five years from now, when someone else finds an Amiga 4000 in their attic or at a yard sale and chooses to go down this same path. We need more of these kinds of write-ups.
RPG mini-theme this week.
Your unrelated music link of the week: Beanbag metal.
So to set the stage (and it's a strange stage...) this is on NetBSD, using pkgsrc, and on hppa architecture. I build this the same way on sparc without issue.
On hppa, I get:
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev.cst4' of cmBinUtilsMacOSMachOOToolGetRuntimeDependenciesTool.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev]' of cmBinUtilsMacOSMachOOToolGetRuntimeDependenciesTool.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev.cst4' of cmBinUtilsWindowsPEDumpbinGetRuntimeDependenciesTool.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev]' of cmBinUtilsWindowsPEDumpbinGetRuntimeDependenciesTool.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev.cst4' of cmBinUtilsWindowsPEObjdumpGetRuntimeDependenciesTool.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev]' of cmBinUtilsWindowsPEObjdumpGetRuntimeDependenciesTool.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZNSt16_Sp_counted_baseILN9__gnu_cxx12_Lock_policyE1EE10_M_releaseEv.cst4' of cmConditionEvaluator.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZNSt16_Sp_counted_baseILN9__gnu_cxx12_Lock_policyE1EE10_M_releaseEv]' of cmConditionEvaluator.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZNSt16_Sp_counted_baseILN9__gnu_cxx12_Lock_policyE1EE10_M_releaseEv.cst4' of cmExecuteProcessCommand.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZNSt16_Sp_counted_baseILN9__gnu_cxx12_Lock_policyE1EE10_M_releaseEv]' of cmExecuteProcessCommand.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZNSt16_Sp_counted_baseILN9__gnu_cxx12_Lock_policyE1EE10_M_releaseEv.cst4' of cmFindPackageCommand.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZNSt16_Sp_counted_baseILN9__gnu_cxx12_Lock_policyE1EE10_M_releaseEv]' of cmFindPackageCommand.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZN17cmFunctionBlockerD2Ev.cst4' of cmForEachCommand.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZN17cmFunctionBlockerD5Ev]' of cmForEachCommand.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZNSt16_Sp_counted_baseILN9__gnu_cxx12_Lock_policyE1EE10_M_releaseEv.cst4' of cmGeneratorExpressionDAGChecker.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZNSt16_Sp_counted_baseILN9__gnu_cxx12_Lock_policyE1EE10_M_releaseEv]' of cmGeneratorExpressionDAGChecker.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev.cst4' of cmLDConfigLDConfigTool.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev]' of cmLDConfigLDConfigTool.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev.cst4' of cmPlistParser.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev]' of cmPlistParser.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZNSt16_Sp_counted_baseILN9__gnu_cxx12_Lock_policyE1EE10_M_releaseEv.cst4' of cmake.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZNSt16_Sp_counted_baseILN9__gnu_cxx12_Lock_policyE1EE10_M_releaseEv]' of cmake.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev.cst4' of cmcmd.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev]' of cmcmd.o
make: *** [Makefile:2: cmake] Error 1
It actually compiles everything just fine; it seems to fail like this right when it gets to creating the "cmake" binary itself.
I see some other posts on here talking about changing the order in the linker section?
I've posted to the NetBSD mailing lists, as well as the CMake Discourse forum, and have had no replies yet.
Any ideas would be greatly appreciated.
I’m on the road as I type this – though I’ll be back by the time it’s posted – and so the links are without much comment.
Your unrelated comics link of the week: What’s the best comic I’ve ever read? Lynda Barry is a master of the form. (via)
[I got redirected here from StackOverflow as this is not a programming question (oops).]
For about the last three years or so, under X11, Alt+KP_[n] has yielded a different keycode/keysym (set?) than Alt+[n]. I have been using this difference to change fonts on the fly on urxvt. Checking the version of urxvt shows that the version has not changed since 2021...
rxvt-unicode (urxvt) v9.26 - released: 2021-05-14
...so, what has changed in X11 to cause this, and why has this (been) changed? Or, conversely, is there anything I can do in any of my configurations (xmodmap, etc) to re-enable this behaviour? I can't really switch to having Alt+[n] accomplish this, as I use Alt+[n] as digit-argument in bash (via .inputrc).
I would really like to get the old behaviour of Alt+KP_[n] back. KP_7 shows as a different keystroke under xev than 7 does, even though it produces the same output value.
This is happening on X11 under both cygwin and NetBSD.
Excerpt of xev(X11) output:
KeyPress event, serial 33, synthetic NO, window 0xc00001,
root 0x36f, subw 0x0, time 2440468, (174,166), root:(2818,273),
state 0x10, keycode 64 (keysym 0xffe9, Alt_L), same_screen YES,
XLookupString gives 0 bytes:
XmbLookupString gives 0 bytes:
XFilterEvent returns: False
KeyPress event, serial 33, synthetic NO, window 0xc00001,
root 0x36f, subw 0x0, time 2441828, (174,166), root:(2818,273),
state 0x18, keycode 79 (keysym 0xffb7, KP_7), same_screen YES,
XLookupString gives 1 bytes: (37) "7"
XmbLookupString gives 1 bytes: (37) "7"
XFilterEvent returns: False
KeyRelease event, serial 33, synthetic NO, window 0xc00001,
root 0x36f, subw 0x0, time 2441906, (174,166), root:(2818,273),
state 0x18, keycode 79 (keysym 0xffb7, KP_7), same_screen YES,
XLookupString gives 1 bytes: (37) "7"
XFilterEvent returns: False
KeyRelease event, serial 33, synthetic NO, window 0xc00001,
root 0x36f, subw 0x0, time 2442734, (174,166), root:(2818,273),
state 0x18, keycode 64 (keysym 0xffe9, Alt_L), same_screen YES,
XLookupString gives 0 bytes:
XFilterEvent returns: False
KeyPress event, serial 33, synthetic NO, window 0xc00001,
root 0x36f, subw 0x0, time 2445171, (174,166), root:(2818,273),
state 0x10, keycode 64 (keysym 0xffe9, Alt_L), same_screen YES,
XLookupString gives 0 bytes:
XmbLookupString gives 0 bytes:
XFilterEvent returns: False
KeyPress event, serial 33, synthetic NO, window 0xc00001,
root 0x36f, subw 0x0, time 2445390, (174,166), root:(2818,273),
state 0x18, keycode 16 (keysym 0x37, 7), same_screen YES,
XLookupString gives 1 bytes: (37) "7"
XmbLookupString gives 1 bytes: (37) "7"
XFilterEvent returns: False
KeyRelease event, serial 33, synthetic NO, window 0xc00001,
root 0x36f, subw 0x0, time 2445468, (174,166), root:(2818,273),
state 0x18, keycode 16 (keysym 0x37, 7), same_screen YES,
XLookupString gives 1 bytes: (37) "7"
XFilterEvent returns: False
KeyRelease event, serial 33, synthetic NO, window 0xc00001,
root 0x36f, subw 0x0, time 2445984, (174,166), root:(2818,273),
state 0x18, keycode 64 (keysym 0xffe9, Alt_L), same_screen YES,
XLookupString gives 0 bytes:
XFilterEvent returns: False
TRIED: In a urxvt, I pressed Alt+KP_7 (sub any KP_ for 7, it makes no difference).
EXPECTED: I expected, as had happened before, the font to change, by virtue of Alt+KP_7 not being interpreted as the same keystroke as Alt+7.
ACTUAL RESULT: bash prompted (arg: 7), as though I had pressed Alt+7.