Here's something annoying about using NetBSD. When I run
mount_cd9660 /dev/cd0a /mnt
and want to browse /mnt graphically, I need to use PCManFM.
Caja (MATE's file manager) and Thunar (Xfce's file manager) both just try to load the directory indefinitely. Only PCManFM can actually browse it.
Why is that?
And also, why is the mouse sensitivity so broken in 10.0? I installed 9.0 for some testing today and was shocked to see the mouse speed being normal.
Hello, I understand that in order to install NetBSD I need to disable Secure Boot. But is there a way to run NetBSD with Secure Boot enabled? If so, how? I've searched the interwebs for information, but couldn't find anything.
Hi.
One of the irritating things about using NetBSD as a daily-driver OS is the sound.
Unlike all other operating systems I've used, NetBSD doesn't seem to use pulseaudio, ALSA, OSS, or anything standardized, and instead uses its own thing.
This means that graphical volume control sliders don't work, and neither does the revered pavucontrol.
And instead I have to use the mixerctl command in the terminal which is not always handy.
Some might say "just write your own slider on top of it".
The thing is that there is a bunch of different "master" output "wires" (so to say), and only trial and error can show you the one you need to modify (master2 in my case).
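For example, here's what that trial and error looks like with NetBSD's mixerctl(1) – control names like outputs.master2 are device-specific, and the value shown is just an illustration:

```shell
# List every mixer control and its current value:
mixerctl -a
# Narrow it down to the output "wires":
mixerctl -a | grep outputs
# Set the one that actually works on this machine (master2 here;
# 200,200 is an example left/right volume pair, range 0-255):
mixerctl -w outputs.master2=200,200
```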
And this custom audio system NetBSD uses isn't supported by SimpleScreenRecorder, which means that recording the screen with audio is not a possibility.
Is there a way to somehow use a different audio system as default?
What config options, using GENERIC as a base, are required to build a NetBSD 10.1 kernel, with legacy i915 DRM support?
I am currently un-commenting the i915drm driver and commenting out all *drmkms drivers (and their framebuffer counterparts).
After config, make depend, and make I am getting all kinds of undefined references to things like drm_debug_flag, drm_ioremap, drm_remapfree, etc.
I get the same result even if I leave the *drmkms drivers and their framebuffers enabled.
Any ideas?
Guys, does NetBSD come with a desktop environment or just a text environment?
When trying to run 'winecfg', wineboot fails to start, and an unpopulated .wine directory is created.
I think it is having an issue getting CPU information from /proc.
Other than the initial sysctl alterations that the package suggests post-installation, is anything else required?
Edit: This persists with a local pkgsrc build as well.
I haven't used NetBSD, since the days of user-mode setting. So, I'm a bit out-of-the-loop on the current state of things.
I have NetBSD set up with the default KMS, and my GPU is detected by Mesa.
'glxinfo|grep renderer' does indeed list my i915.
However, all glx demos end in a segfault.
Is the i915 a bit too old, for current 3D support, in NetBSD? I've debated rebuilding the kernel, for user-mode setting, and trying xf86-video-intel.
I have tried using different accel methods, by creating an xorg.conf, and that didn't solve anything.
On a positive note, software rendering works fine (no glxgears segfault).
hello there!
I was wondering if any of you guys has been able to compile NetBSD 10 so it runs on a Zynq 7000.
I’ve followed the normal path to build it: took a copy of GENERIC from evbarm, edited the console and other settings, and commented out the lines for subsystems I won’t use.
The thing is, if I comment out everything I don’t use, build.sh fails, complaining about orphaned modules.
I can live with more modules compiled than needed, but here’s the catch,
there is an entry for an arm generic timer.
if you comment it or remove it, the system will not compile.
if you leave it in, the CPU will try to use it and throw a kernel panic.
Just wondering if anyone has had similar issues?
best regards
Hi,
I’m trying to enable console screen blanking on NetBSD 10.1 (no X11).
The machine is an Asus Eee PC 701 using the intelfb framebuffer driver.
The relevant lines from /var/run/dmesg.boot are:
intelfb0: framebuffer at 0xd0009000, size 800x400, depth 32, stride 3200
wsdisplay0 at intelfb0 kbdmux 1: console (default, vt100 emulation), using wskbd0
wsmux1: connecting to wsdisplay0
I’m trying to use screenblank to automatically turn off the screen after inactivity, but I get:
screenblank: No framebuffer devices, exiting
Has anyone successfully enabled console screen blanking with intelfb ?
Hi to All! Sorry for my bad English...
NetBSD asus.localnet 11.99.4 NetBSD 11.99.4 (GENERIC) #0: Sun Nov 23 18:50:33 UTC 2025 [email protected]:/usr/src/sys/arch/amd64/compile/GENERIC amd64
Some time ago, network printing on my Epson M200 worked fine after compiling its driver from src. cupsd and cups_browsed started successfully after installing "cups cups-pdf cups-filters (which includes cups_browsed)", copying the daemon scripts (cupsd, cups-browsed [and probably the avahi daemon]) from /usr/pkg/share/examples/rc.d to /etc/rc.d, and adding "cupsd=YES" and "cups_browsed=YES" to /etc/rc.conf. The cups-browsed package may have been installed manually back then, rather than pulled in by cups-filters.
But after restarting NetBSD, cups_browsed no longer starts, so I'm having trouble detecting and printing to the network printer Epson M200 after configuring it via localhost:631/admin.
How can I solve this problem?
No theme, just fun.
In my lifetime, there’s been one ecosystem I deeply regret having missed out on: the Sun Microsystems ecosystem of the late 2000s. At that time, the company offered a variety of products that, when used together, formed a comprehensive ecosystem that was a fascinating, albeit expensive alternative to Microsoft and Apple. While not really intended for home use, I’ve always believed that Sun’s approach to computing would’ve made for an excellent computing environment in the home.
Since I was but a wee university student in the late 2000s living in a small apartment, I did not have the financial means nor the space to really test this hypothesis. Now, though, Sun’s products from that era are decidedly retro, and a lot more approachable – especially if you have incredibly generous readers. So sit down and buckle up, because we’ve got a long one today.
If you wish to support OSNews and longform content like this, consider becoming a Patreon or donating to our Ko-Fi. Note that absolutely zero generative “AI” was used in the writing of this article. No “AI” writing aids, no “AI” summaries, no ChatGPT, no Gemini search nonsense, nothing. I take pride in doing research and writing properly, without the “aid” of digital parrots with brain damage, and if there’s any errors, they’re mine and mine alone. Take pride in your work and reject “AI”.
In the early 2000s, it had already become obvious that the future of workstations lay not with custom architectures, bespoke processors, and commercial UNIX variants, but with standard x86, off-the-shelf Intel and AMD processors, and Windows and Linux. The writing was on the wall, everyone knew it, and the ensuing consolidation on x86 turned into a veritable bloodbath. In the ’80s and ’90s, many of these ISAs were touted as vastly superior x86 killers, but fast-forward a decade or two, and x86 had bested them all in both price and performance, leaving behind a trail of dead ISAs.
Never bet against x86.
Virtually none of the commercial UNIX variants survived the one-two punch of losing the ISA they were married to and the rising popularity of Linux in the workstation space. HP-UX was tied to HP’s PA-RISC, and both died. SGI’s IRIX was tied to MIPS, and both died. Tru64 was tied to Alpha, and both died. The two exceptions are IBM’s AIX and Sun’s Solaris. AIX workstations were phased out, and while AIX itself is still nominally in development for POWER servers, it is wholly inaccessible to anyone who doesn’t wear a suit and have a massive corporate spending budget. Solaris, meanwhile, which had long been available on x86, saw its “own” ISA SPARC live on in the server space until roughly 2017 or so, and was even briefly available as open source until Oracle did its thing. As a result, Solaris and its derivative Illumos are still nominally in active development, but in the grand scheme of things they’re barely even a blip on the radar in 2025.
Never bet against Linux.
During these tumultuous times, the various commercial UNIX vendors all pushed out systems that would become the final hurrahs of their respective UNIX workstation lines. DEC, then owned by HP, released its AlphaStation ES47 in 2003, marking the end of the road for Alpha and Tru64 UNIX. HP’s own PA-RISC architecture and HP-UX met their end with the HP c8000 (which I own), an all-out PA-RISC monster with two dual-core processors running at 1.1GHz. SGI gave its MIPS line of machines running IRIX a massive send-off with the enigmatic and rare Tezro in 2003. In 2005, IBM tried one last time with the IntelliStation POWER 285, followed a few months later by the heavily cut-down 185, the final AIX workstation.
And Sun unveiled the Ultra 45, its final SPARC workstation, in 2006. Sun was already in the middle of its transition to x86 with machines like the Sun Java Workstation and its successors, the Ultra 20 and 40, and then surprised everyone by reviving their UltraSPARC workstation line with the Ultra 25 and 45, which shared most – all? – of their enclosures with their x86 brethren. They were beautiful, all-aluminium machines with gorgeous interior layouts, and a striking full-grill front, somewhat inspired by the PowerMac G5 of that era.
And ever since the Ultra 45 was rumoured in late 2005 and then became available in early 2006, I’ve been utterly obsessed with it. It’s taken almost two decades, but thanks to an unfathomably generous donation from KDE e.V. board member and FreeBSD contributor Adriaan de Groot, a very unique and storied Sun Ultra 45 and a whole slew of accessories showed up at my doorstep only a few weeks ago. Let’s look back upon this piece of history that is but a footnote to most, but a whole book to me – and experience Sun’s ecosystem from around 2006, today.
First and foremost, I want to express my deep gratitude to Adriaan de Groot. Without him, none of this would have been possible, and I can’t put into words how grateful I am. He donated this Ultra 45 to me at no cost – not even the cost of shipping – and he also shipped another box to me containing a few Sun Ray thin clients, completing the late 2000s Sun ecosystem I now own. Since the Ultra 45 was technically owned by KDE e.V. – more on that below – I’d also like to thank the KDE e.V. Board for giving Adriaan permission for the donation. I’d also like to thank Volker A. Brandt, who sent me a Sun Ray 3, a few Ultra 45 hard drive brackets, and some other Sun goodies.
The Sun Ultra 45 De Groot sent me was a base model with an upgraded GPU. It had a single UltraSPARC IIIi 1.6GHz processor, 1GB of RAM, and the most powerful GPU Sun ever released for its SPARC workstation line, the Sun XVR-2500, a rebadged 3Dlabs Wildcat Realizm with 256MB of GDDR3 memory. Everything else you might need – sound, networking, and so on – is integrated into the motherboard. It also comes with a slot-loading, slimline DVD drive, a 250GB 7200 RPM SATA hard drive, and its massive 1000W power supply.
First order of business was upgrading the machine to match the specifications I wanted, with the most important upgrade being doubling the processor count. Finding a second 1.6GHz UltraSPARC IIIi processor was easy, as they’re all over eBay and won’t cost you more than a few dozen euro excl. any shipping; they were also used in various Sun SPARC servers and are thus readily available. The bigger issue is finding a second CPU cooler, as they are entirely custom for Sun hardware and quite difficult to find. I found a seller on eBay who had them in stock, but be prepared to pay out the nose – I paid about €40 for the CPU, but around €160 for the cooler, both excl. shipping.
Installing the second CPU and cooler was a breeze, as it’s no different than installing a CPU or cooler on any other, regular PC. The processor was detected properly by the machine, and the cooler whirred to life without any issues, but of course, if you’re buying used you may always run into issues with parts. If you want to save some money, there is a way to use a specific cooler from a Dell workstation instead (and possibly others?), but I wanted the real deal and was willing to pay for it.
The second upgrade was the RAM. A mere 1GB wasn’t going to cut it for me, so alongside the processor and cooler I also ordered a set of four 1GB RAM sticks, the exact right kind, and ECC registered, too, as the machine demands it. This turned out to be a major issue, as I discovered the machine simply would not boot in any way, shape, or form with this new RAM installed. It didn’t even throw up any error message over serial, and as such, it took me a while to pinpoint the issue. Thankfully, I remembered I had a broken, non-repairable Sun server from the same era as the Ultra 45 lying around, and it just so happened to have 8✕1GB Sun-branded RAM sticks in it. I pilfered the sticks out of the server, stuck them in the Ultra 45, and the machine booted up without any problems.
I later learned from people on Fedi who used to work with Sun gear from this era that RAM compatibility was always a major headache. It seems the wisest thing to do is to just buy Sun-branded memory kits, because there’s very little guarantee any generic RAM will work, even if it is entirely identical to whatever sticks Sun slapped its brand stickers on. For now, 8GB is enough for me, but in a future moment of weakness, I may order 8✕2GB Sun-branded memory to max the Ultra 45 out. The main reason you may want to invest in a decent amount of RAM is to make ZFS on Solaris 10 happy, so take that into account.
Aside from these upgrades to the base system itself, I also planned two specialty upgrades in the form of two unique expansion cards. First, a Sun Flash Accelerator card to speed up ZFS’s operations on the spinning hard drive, and second, a SunPCi IIIpro, which is an entire traditional x86 PC on a PCI-X card, the final iteration of a series of cards designed specifically for allowing Solaris SPARC users to run Windows inside their workstation. I’ll cover these two expansion cards in more detail later in the article.
The Sun Ultra 45 was launched as one of four brand new Sun workstations, with an entirely new design shared between all four of them. Two were successors to Sun’s first (okay, technically second) foray into x86 workstations, the Java Workstation: the Ultra 20 (single-socket Opteron) and Ultra 40 (dual-socket Opteron). These were mirrored by the Ultra 25 (single-socket UltraSPARC IIIi) and Ultra 45 (dual-socket UltraSPARC IIIi). However, where the Ultra 20/40 were genuine improvements over their Java Workstation predecessors, the story gets a bit more muddled when it comes to their SPARC brethren.
Let’s take a look at the most powerful direct predecessor of the Ultra 45, the Sun Blade 2500 Silver. The table below lists the core specifications of the Blade 2500 Silver compared to the Ultra 45. Notice anything?
| | Blade 2500 Silver (2005) | Ultra 45 (2006) |
| --- | --- | --- |
| CPU | 2× UltraSPARC IIIi 1.6GHz | 2× UltraSPARC IIIi 1.6GHz |
| CPU cache | 64KB data, 32KB instruction, 1MB L2 | 64KB data, 32KB instruction, 1MB L2 |
| RAM | DDR SDRAM (PC2100), 8 slots, 16GB max., ECC registered | DDR SDRAM (PC2100), 8 slots, 16GB max., ECC registered |
| GPUs | XVR-100, XVR-600, XVR-1200 | XVR-100, XVR-300, XVR-2500 |
| Expansion slots | 6× 64-bit PCI (3× 33/66MHz, 3× 33MHz) | 2× PCIe ×16 (×8 lanes), 1× PCIe ×8 (×4 lanes), 2× PCI-X 100MHz/64-bit |
| Storage interface | 1× Ultra160 SCSI | SAS/SATA controller, 4 disks |
As you can see, the Ultra 45 was only a very modest upgrade to the Blade 2500 Silver. While the upgrades the Ultra 45 brings over its predecessor are very welcome, it’s not like upgraded expansion slots and the move to SAS/SATA would make Blade 2500 owners rush out to upgrade in droves. For heavy graphics users, the new XVR-2500 graphics card may have been tempting as it is inherently incompatible with the Blade 2500 (it uses PCIe), but I have a feeling many customers at the time would’ve probably just opted to move to x86 instead. For all intents and purposes, the Ultra 45 was a slim upgrade to its predecessor.
The story gets even more problematic for the SPARC side of Sun’s workstation business when you consider the age of the 2500 line. While the 2500 Silver was released in early 2005, its only upgrade compared to the 2500 Red was a clock speed bump (from 1280MHz to 1600MHz), and the 2500 Red was released in 2003. This means that the Ultra 45 is effectively a computer from 2003 with improved expansion slots and a fancy new case. As a final knock on the Ultra 45, its processor had already been supplanted by Sun’s first multicore SPARC processors, the UltraSPARC IV in 2004 and the UltraSPARC IV+ in 2005, and Sun’s first multicore, multithreaded processor, the UltraSPARC T1, in 2005. These chips would never make it to any workstations, being used in servers exclusively.
Sun clearly knew further investments in SPARC workstations were simply not worth it at the time, and thus opted to squeeze as much as it could out of a 2000-2003ish platform, instead of investing in the development of a brand new workstation platform built around the UltraSPARC IV/IV+/T1. In other words, while the Ultra 45 is the last and most powerful SPARC workstation Sun ever made, it wasn’t really the balls-to-the-wall SPARC workstation sendoff it could’ve been, and that’s a shame.
Now that we have a good idea of where the Ultra 45 stood in the market, let’s take a closer look at my specific machine. My Ultra 45 is not just any machine, but actually a pre-production model, or, in Sun parlance, an “NSG EARLY ACCESS EVALUATION UNIT”. The bright orange sticker on the side and the big yellow sticker at the top make it very clear this isn’t your ordinary Ultra 45.


I’ve removed and cleaned some other sticker residue, but these will remain exactly where they are, as I consider them crucial parts of its unique history. The fact it’s a pre-production unit means there are some very small differences between this particular machine and the final version sold to consumers. The biggest difference is found on the inside, where my model is missing the two RAM air ducts found on the final version; my wild guess is that during late-stage testing they discovered the RAM could use some extra fresh air, and added the ducts.
Another difference inside is that the original CPU cooler, which came with the machine, is purple, while the second CPU cooler I bought off eBay is silver. As far as I can tell based on checking countless photos online, all CPU coolers on final models were silver, making my single purple cooler an oddity. I’d love to know the story behind the purple cooler – Sun used purple a lot in its branding and hardware design at this point, and it could be that this is a cooler from one of their many server models. Other than the colour, it’s entirely identical to its silver counterpart.




On the outside, the only sign this is a pre-production model – other than the stickers – is the fact that the ports and LEDs on the front of the device are unlabeled, while the final models had nice and clear labels. My machine also lacked the “Ultra 45” badge at the bottom of the front panel, so I did a silly and spent around €50 (incl. shipping and EU import duties) on a genuine replacement. It clicks into place in a dedicated hole in the metal meshwork.
It’s the little details that matter.
On the front of my Ultra 45, there’s a strong hint as to its history: a yellow label maker sticker that says “KDE project”. Sun’s branch in Amersfoort, The Netherlands, donated (or loaned out?) this particular Ultra 45 to KDE e.V. back in 2008 or 2009, so that the KDE project could work on KDE for Solaris and SPARC. You can even find blog posts by Adriaan de Groot about this very machine from that time period. It served that function for a few years, I would guess up until around 2010, when Oracle acquired Sun and subsequently took Solaris closed-source again.


Since then, it’s mostly been sitting unused in Adriaan’s office, until he offered to send it to me (after confirming with KDE e.V. it could be donated to me). Considering KDE is an important part of the machine’s history, I’m leaving the little KDE label right where it is. Perhaps Sun sent out its preproduction machines to people and projects that could make use of it, which was a nice – and a little self-serving, of course – gesture. Now it’s getting yet another lease on life as by far my favourite (retro)computer in my collection, which is pretty neat.
Once I had the machine set up and booting into the OpenBoot prompt, it was time to settle on the software I’d be running on it. Since I tend to prefer setting up machines like this as historically accurate as is reasonable, Solaris 10 was the obvious choice. Luckily, Oracle still makes the SPARC version of Solaris 10 available in the form of Solaris 10 1/13 as a free download. This article won’t go too deep into operating system installation and configuration – it’s straightforward and well-documented – but I do have a few notes I’d like to share.
First and foremost, if you intend to use ZFS as your file system – and you should – make sure you have enough RAM, as mentioned earlier, but also to start the installer in text mode. You can’t install on ZFS when using the graphical installer, in which you’ll be restricted to UFS. Both variants of the installer are easy to use, straightforward, and a breeze to get through for anyone reading OSNews (or anyone crazy enough to buy SPARC hardware in 2025). If you’ve never worked with SPARC hardware and Sun’s OpenBoot before, have a list of ok> prompt commands at hand to boot the correct devices and change any low-level hardware settings.
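For reference, a few of the ok> prompt commands you'll reach for most often – all standard OpenBoot, though device aliases like cdrom and disk should be verified with devalias on your specific machine:

```
ok printenv boot-device      \ show the current default boot device
ok probe-scsi-all            \ enumerate attached disks and DVD drives
ok boot cdrom                \ boot the Solaris 10 installer from DVD
ok boot disk                 \ boot from the default hard disk
ok setenv auto-boot? false   \ always stop at the ok prompt on power-up
```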
The 1/13 in Solaris 10 1/13 means the DVD ISO is up-to-date as of January 2013, and sadly, Oracle hides post-1/13 patchsets behind support contract paywalls, so you won’t be getting them from any official sources. There’s a few 2018 and 2020 patchsets floating around, as well as collections of individual patch files, but I’ve had some issues with those. One of the major issues I ran into with a more recent patchset is that it broke the Solaris Management Console, a Java-based graphical tool to manage some settings. There is a fix, but it’s hidden behind Oracle’s dreaded support contract paywalls, so I couldn’t do anything about it.
I’m sure a later version of the Solaris 10 patchset – they’re still being made twice a year, it seems – addressed this issue, but none of those patchsets ‘leaked’ online. I did try to install the individual patches in the massive patchset one-by-one to avoid potentially problematic ones identified by their description, but it was a hell of a lot of work that felt never-ending, since you also have the dependency graph to work through and track. After a few hours of this nonsense, I gave up. I would love for Oracle to stop being needlessly protective over a bunch of patchsets for a dead operating system running on a dead architecture, but I don’t own a massive Hawaiian island so I guess I’m the idiot.
One of the things you’ll definitely want to do after installing Solaris 10 is set up OpenCSW. OpenCSW is a package manager and associated repository of Solaris 10-native SVR4 packages for a whole bunch of popular and useful open source programs and tools, with dependency tracking, update support, and so on. It’s incredibly easy to set up, just as easy to use, and installs its packages in /opt/csw by default, for neat separation. As useful as OpenCSW is, though, it’s important to note that most packages have not been updated in years, so it’s not exactly a production-ready environment. Still, it contains a ton of tools that make using Solaris 10 on SPARC in 2025 a hell of a lot easier, all installable and manageable through a few simple commands.
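The bootstrap, following OpenCSW's own quick-start instructions (URL and paths as documented by the project), looks roughly like this:

```shell
# As root: bootstrap pkgutil from OpenCSW's installer package.
pkgadd -d http://get.opencsw.org/now
# Refresh the package catalog...
/opt/csw/bin/pkgutil -U
# ...then install something, e.g. GNU wget.
/opt/csw/bin/pkgutil -y -i wget
```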

I have a few other random notes about using Solaris 10 on a workstation like this. First, and this one is obvious, be sure to create a user for your day-to-day use so you don’t have to be logged in as the root user all the time. If you intend to use the Solaris Management Console, which offers a graphical way to manage certain aspects of your machine, you’ll want to create the Primary Administrator role and assign it to your user account. This way, you can use the SMC even through your regular user account since it’ll ask you to log into the primary administrator role.
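On Solaris 10 this boils down to assigning the Primary Administrator rights profile to your account – a sketch, with "thom" standing in for your own username:

```shell
# As root: grant the Primary Administrator rights profile to user "thom"
# ("thom" is a placeholder account name).
usermod -P "Primary Administrator" thom
# Verify which rights profiles the account now carries:
profiles thom
```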
Second, assuming you want to do some basic browsing and emailing, you’ll also want to install the latest possible versions of Firefox and Thunderbird, namely version 52.0 of both. You can either opt for basic tarball installation, or use the SVR4 packages available from UNIX Packages to make installation a little bit easier. Version 52.0 of Firefox is severely outdated, of course, so be advised; tons of websites won’t work properly or at all, and security is obviously out the window. A newer version will most likely not be released since that would require an up-to-date Rust port and toolchain for Solaris 10/SPARC as well, which isn’t going to happen.
In addition, if you’ve set up OpenCSW, you should consider adding /opt/csw/bin to your PATH, so that anything installed through OpenCSW is more easily accessible. Furthermore, Solaris 10 installs both CDE and the Java Desktop System – GNOME 2.6 with a fancy Sun theme – and I highly suggest using the JDS since it was properly maintained at the time, while CDE had already stagnated for years at that point. It’ll give you niceties like automatic mounting of USB sticks and DVDs/CDs, and make it much easier to access any possible network locations. Speaking of which – you’ll want to set up a SAMBA or NFS share so you can easily download files on a more modern machine, and subsequently make them accessible on your Solaris 10 machine. Both of these protocols are installed by default.
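The PATH addition itself is trivial; a minimal sketch for a POSIX shell, to be appended to ~/.profile (/opt/csw/bin being OpenCSW's default prefix):

```shell
# Make OpenCSW-installed binaries available without typing full paths.
PATH=$PATH:/opt/csw/bin
export PATH
```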
As a final note, there are three sources I use to find ancient software for these older UNIX systems (I use both Solaris 10 and HP-UX): fsck.technology, whatever this is, and the Internet Archive. You can find an absolutely massive pile of programs, software, operating system patches, and everything else in these three sources, including various ways to circumvent any copy protection schemes. I don’t care about the legality, and neither should you.
If you want to go for something more modern than Solaris 10, SPARC is still supported by a variety of operating systems, like NetBSD, OpenBSD, and a number of Linux distributions. Your best bet is to buy one of the lower-end GPUs, like the XVR-300 or XVR-600, as the XVR-2500 is not supported by the BSDs, but may work on Linux. I haven’t tried any of them yet – this article is long enough as it is – but I will definitely try them out in the future.
The future island owners among you may also be wondering about Illumos and its various derivatives and distributions, like OpenIndiana and personal OSNews darling Tribblix. While they all do support SPARC, it’s spotty at best, especially on workstations like the Ultra 45. SPARC servers have a better success rate, but the Ultra 45 specifically is unsupported at this point due to bugs preventing Illumos and friends from even booting. The good news, though, is that the people working on the SPARC variants have access to Ultra 45 machines, and work is being done to fix these issues.
Now, let’s move on to the two specialty upgrades I bought for this machine.
The transition from spinning hard disk drives to solid-state drives was an awkward time. Early on, SSDs were still prohibitively expensive, even at small sizes, but the performance benefits were obviously significant, and everyone knew which way the wind was blowing. During this awkward time, though, people had to choose between a mix of solid state and spinning drives, leading to products like hybrid drives, which combined a small SSD with a large hard drive to get the best of both worlds. As prices kept coming down, people could opt for a small SSD for their operating system and most-used applications, storing everything else on spinning drives.
A hybrid drive doesn’t necessarily have to exist as a single, integrated product, though; depending on factors like operating system, controller, and file system, you could also assign SSDs as dedicated accelerators. This is where Oracle’s line of Flash Accelerator cards – the F20, F40, and F80 – come into play. These were released starting in roughly 2010, and consisted of several replaceable flash memory modules on a PCIe card. They were rebranded LSI Nytro Warpdrives with some custom firmware, which can actually be flashed back to their generic LSI firmware to turn them into their white label LSI counterparts.
Oracle’s Flash Accelerator cards are remarkably flexible, because their firmware presents the individual flash modules as individual block devices to the Solaris 10 operating system. This way, you can assign each individual module to perform specific tasks, which, combined with the power of Solaris 10’s ZFS, gives people who know what they are doing quite a few options to speed up specific workloads. In addition – and this is pretty cool – these accelerator cards can also serve as a boot device, meaning you can install and run Solaris 10 straight from the accelerator card itself.
These cards come in a variety of sizes, and they’re incredibly cheap these days. They’re not particularly useful or economical for modern applications, but they’re still fun relics from an older time. And because they’re so cheap and plentiful on the used market, they’re a great addition to a retro project like my Ultra 45 – even if they’re technically intended for server use. I ordered a Flash Accelerator F20 on eBay for like €20 including shipping, giving me 96GB, spread out over four 24GB flash modules, to play with.
The card has two stacks of two flash modules, which can be removed and replaced in case of failure, as well as a replaceable battery. Sadly, the one I ordered didn’t come with the full-height PCI bracket, but even without any bracket, the card sits incredibly firmly in its slot. The card also functions as a host bus adapter, giving you two additional SAS HBA ports for further storage expansion. Do note that you’ll need to perform a reconfiguration boot of your SPARC system after installing the card, which is done by first dropping to the ok> prompt, and then executing boot -r. Once rebooted, the format command should display the four flash modules.
# format
Searching for disks…done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <ATA-HITACHIHDS7225S-A94A cyl 65533 alt 2 hd 16 sec 465>
/pci@1e,600000/pci@0/pci@9/pci@0/scsi@1/sd@0,0
1. c3t0d0 <ATA-MARVELLSD88SA02-D21Y cyl 23435 alt 2 hd 16 sec 128>
/pci@1e,600000/pci@0/pci@8/LSILogic,sas@0/sd@0,0
2. c3t1d0 <ATA-MARVELLSD88SA02-D21Y cyl 23435 alt 2 hd 16 sec 128>
/pci@1e,600000/pci@0/pci@8/LSILogic,sas@0/sd@1,0
3. c3t2d0 <ATA-MARVELLSD88SA02-D21Y cyl 23435 alt 2 hd 16 sec 128>
/pci@1e,600000/pci@0/pci@8/LSILogic,sas@0/sd@2,0
4. c3t3d0 <ATA-MARVELLSD88SA02-D21Y cyl 23435 alt 2 hd 16 sec 128>
/pci@1e,600000/pci@0/pci@8/LSILogic,sas@0/sd@3,0
Specify disk (enter its number):
Now it’s time to decide what you want to use them for. I’m not a system administrator and I have very little experience with ZFS, so I went for the crudest of options: I assigned each module as a ZFS cache device for the ZFS pool I have Solaris 10 installed onto, which is stupidly simple (the exact disk names can be identified using format):
# zpool add -f <pool name> cache <disk1> <disk2> <disk3> <disk4>
To check the status of your pool and make sure the modules are now acting as cache devices:
# zpool status
  pool: ultra45
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        ultra45     ONLINE       0     0     0
          c1t0d0s0  ONLINE       0     0     0
        cache
          c3t0d0s0  ONLINE       0     0     0
          c3t1d0s0  ONLINE       0     0     0
          c3t2d0s0  ONLINE       0     0     0
          c3t3d0s0  ONLINE       0     0     0
errors: No known data errors
The theory here is that this should give the 7200 RPM SAS drive the ZFS pool in question is running on a nice performance boost. Now, this is mostly theory in my particular case, since I’m not using this machine for any heavy workloads in 2025, but perhaps if you were doing some heavy lifting back in 2010 on your Solaris 10 workstation, you might’ve actually seen some benefit.
Of course, this is anything but an optimal setup to get the most out of this hardware, but I already had a fully configured Solaris 10 install on the spinning hard drive and didn’t feel like starting over. Like I said, I’m no system administrator or ZFS specialist, but even I can imagine several better setups than this. For instance, you could install Solaris 10 in a ZFS pool spanning two of the flash modules, while assigning the remaining two flash modules as log and cache devices for the spinning hard drives you keep your data files on. In fact, Oracle still has a ton of documentation online about creating exactly such setups, and it’s not particularly hard to do so.
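As a rough sketch of that layout – pool and disk names here are illustrative, and on a real install the root pool would be created by the Solaris installer rather than by hand – the plan would look something like this (the helper only prints the commands, so nothing is touched):

```shell
# plan() prints each zpool command instead of executing it (dry run).
plan() { echo "zpool $*"; }

PLAN=$(
  # Solaris 10 root pool mirrored across two of the flash modules:
  plan create rpool mirror c3t0d0s0 c3t1d0s0
  # Data pool on the spinning disk, with the remaining two modules
  # acting as intent log (ZIL) and read cache (L2ARC):
  plan create tank c1t0d0s0
  plan add tank log c3t2d0s0
  plan add tank cache c3t3d0s0
)
printf '%s\n' "$PLAN"
```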
This F20 card wasn’t part of my original planning, and the only reason I bought it is because it was so cheap. It’s a fun toy you could buy and use on a whole variety of older systems, as long as they have PCIe slots and compatibility with PCIe storage. The entire card is just a glorified HBA, after all, and many operating systems from the past 20 years or so can handle such cards and its flash storage just fine.
Let’s move on to something more interesting – something I’ve been dying to use ever since I learned of their existence decades ago.
Even in the ’90s, much of the computing world – especially when it came to generic office and home use – had already moved firmly to x86 and Windows. Sun knew full well that in order to entice more customers to even consider using SPARC-based workstations, they needed to be interoperable with the x86 Windows world, since those were the kinds of machines their SPARC workstations would have to interoperate with. So, from quite early on in the 1990s, they were working on solutions to this very problem.
Sun’s first solution was Wabi, a reimplementation of the Win16 API to allow a specific set of popular Win16 applications to run on non-x86 UNIX workstations. This product was licensed by other companies as well, with IBM, HP, and SCO all releasing their own versions, and eventually it was even ported to Linux by Caldera in 1996. Another solution Sun offered at the same time as Wabi was SunPC, a PC emulator based on technology used in SoftPC. SunPC was limited to at most 286 software, however, so if you wanted to emulate software that required a 386 or 486 – like, say, Windows 3.x or 95 – you needed something more.
And it just so happens Sun offered something more: the SunPC Accelerator Card. This line of accelerator cards, for SBus-based SPARC workstations, contained a 486 processor (and, in one later model, an AMD 5x86 processor) on an expansion card that the SunPC emulator could use to run x86 software that required a 386 or 486. With this card installed, SPARC users could run full Windows 3.x or Windows 95 on their workstations, albeit with a performance penalty, as the SunPC Accelerator Card did not contain any memory; SunPC had to emulate the RAM.
With Sun’s SPARC workstations moving to more standard PCI-based expansion busses in the second half of the 1990s, Sun would evolve their SunPC line into the SunPCi (clever), and that’s when this product line really hit its stride. Instead of containing just an x86 processor, SunPCi cards also contained memory, a graphics chip, sound chip, networking, VGA ports, serial ports, parallel ports, and later USB and FireWire as well. A SunPCi card is genuinely an entire x86 PC on a PCI expansion card, and the operating system running on that x86 PC can be used either in a window inside Solaris, or by connecting a dedicated monitor, keyboard, mouse, speakers, and so on. Or both at the same time!
In the late 1990s and early 2000s, Sun would release a succession of ever faster models of SunPCi cards, culminating in the last and most powerful variant: the SunPCi IIIpro, released in 2005. This very card is one of the reasons I was so excited to get my hands on a machine that could do it justice, so I splurged on a new-in-box model offered on eBay. This absolute behemoth of a PCI-X card contains the following PC hardware:
The base card contains a VGA port, a USB port, Ethernet port, and audio in and out. The number of ports can be expanded with two optional daughter cards, one of which adds two more USB ports as well as a FireWire port, while the other one adds a serial and parallel port. These two daughter boards each require an additional PCI slot, but only the USB/FireWire one actually makes use of a PCI-X connector. In other words, if you install the main card and its two daughter boards, you’ll be using up three PCI slots, which is kind of insane.
By default, the card only comes with a single 256MB DDR SODIMM, which is a bit anemic for many of the operating systems it supports – as such, I added an additional 512MB DDR SODIMM for a total of 768MB of RAM. Unlike the Ultra 45 itself, it seems the SunPCi IIIpro is not particularly picky about RAM, so you can most likely dig something up from your parts pile and have it work properly. The card has a few other expansion options too, like an IDE header so you can use a real hard disk instead of an emulated one, but that would require some hacking inside the Ultra 45 due to a lack of power options for IDE hard drives.
Once you have installed the card – a fiddly process with the two daughterboards attached – it’s time to boot the host machine back up and install the accompanying SunPCi software. The last version Sun shipped is SunPCi Software 3.2.2 in 2004, and in order to make it work on my Ultra 45 I had to perform some workarounds. Searching the web seems to indicate the problems I experienced are common, so I figured I’d collect the problems and workarounds here for posterity, so I can spare others the trouble.
What you’ll need is the SunPCi Software 3.2.2, the latest version; you can find this in a variety of locations around the web, including in the software repositories I mentioned earlier in the article. The installation is fairly straightforward, but the post-install script might throw up an error about being unable to find a driver called sunpcidrv.2100. The fix is simple, but the odds of finding this out on your own are slim. Once the installation is completed, run the following commands as root to symlink to the correct drivers:
cd /opt/SUNWspci3/drivers/solaris/
ln -s sunpcidrv.280 sunpcidrv.2100
ln -s sunpcidrv.280.64 sunpcidrv.2100.64
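My guess – and it is only a guess – is that the numeric suffix encodes the Solaris release in the old 2.x naming scheme with the dots dropped (Solaris 2.8.0 → 280, 2.10.0 → 2100), so the post-install script is looking for a Solaris 10 driver build that never shipped; as the symlinks show, the Solaris 8 driver works fine in its place. The inferred pattern:

```shell
# Inferred naming pattern (my assumption, not documented): the suffix is
# "2<minor>0", derived here from a SunOS release string like "5.10".
drv_suffix() { printf '2%s0' "${1#5.}"; }

echo "sunpcidrv.$(drv_suffix 5.8)"    # -> sunpcidrv.280 (the driver that ships)
echo "sunpcidrv.$(drv_suffix 5.10)"   # -> sunpcidrv.2100 (what the script wants)
```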
The second problem you’ll most likely run into is absolutely hilarious. If you try and start the SunPCi software with /opt/SUNWspci3/bin/sunpci, you’ll be treated to this gem:
Your System Time appears to be set in the future
I can't believe it's really Tue Nov 11 18:01:23 2025
Please set the system time correctly
This error message comes from a bug in the 3.2.2 release that was fixed in patch 118591-04, so you’ll have to download that patch (118591-04.zip) from any of the countless repositories that hold it (here, here, here, etc.) and install it according to the instructions to remove this time bomb. I’m glad people have been willing to share this patch in a variety of places, because if this one remained locked behind donations to the Larry Ellison Needs More Island Fund I’d be pretty upset.
Once installed, the SunPCi software should start just fine, greeting you with a dialog where you can configure your emulated hard drive and select the operating system you wish to install. Provided you have the correct operating system installation disc, the operating system you select will automatically be installed onto the emulated hard drive. You’ll also see proof that yes, this card is really just a regular, run-of-the-mill PC: it boots up like one, it has a BIOS like any other PC, you can enter this BIOS, and you can mess around with it. It’s really just a PC.
Sun put a lot of effort into making the operating system installation process as seamless and straightforward as possible; in the case of Windows XP, for instance, the SunPCi software will copy the contents of your Windows XP disc to a temporary location, and slipstream all the necessary drivers and some other software (specifically Java Web Start, of course) into the Windows XP installation process. In other words, once the installation is completed and you end up at the Windows XP desktop, all proper device drivers have been installed and you’re ready to start using it.
The amount of effort and thought Sun put into this product shines through in other really nice touches as well. For instance, inserting a CD or DVD into the Ultra 45’s drive will not only automatically mount it in Solaris, but also inside Windows XP – autorun and all. Making folders on the host’s file system available inside Windows XP is also an absolute breeze, as you can mount any folder on the host system inside Windows XP using Explorer’s Map Network Drive feature: \\localhost\home\thomholwerda, for instance, will make /home/thomholwerda available in Windows. You can also copy and paste text between host and client, and SunPCi offers the option to grow the virtual hard drive you’re using in case you need more space.
The installation procedure installs two different video drivers in Windows XP: one for the S3 Graphics ProSavageDDR, and one for the SunPCi Video. The former drives any external display connected to the SunPCi card, while the latter outputs to a window inside Solaris. If you need to do any graphics or video-related work, Sun strongly suggests you use the S3 chip by using an external display, and it’s obvious why. The performance of the SunPCi Video is so-so, and definitely feels like it’s rendering in software (which it is), so you can expect some UI stutters here and there. A nice touch is that there’s no need for the SunPCi window to “capture” the mouse pointer manually, as you can freely move your Solaris cursor in and out of the SunPCi window.
As for the performance of Windows XP – it will align more or less with what you can expect from a mobile Athlon from 2002, so don’t expect miracles. It’s entirely usable for office and related tasks, but you won’t be doing any hardcore gaming or complex, demanding professional work. The goal of this card is not to replace a dedicated x86 workstation, but to give Solaris/SPARC users access to the various office-related applications most organisations were using at the time, like Microsoft Office, IBM’s Domino, and so on, and it achieves that goal admirably.
There’s a ton of other things you can do with this card that I simply haven’t had the time yet to dive into (this article is already way too long), but that I’d like to come back to in the future. For instance, the list of officially supported operating systems includes not just Windows XP, but also Windows 2000, Server 2003, and a variety of versions of Red Hat (Enterprise) Linux (think Linux version ~2.4.20). The SunPCi software also contains an entire copy of DR-DOS 7.01, which is neat, I guess. Lastly, the user manual for the SunPCi software lists a whole lot of advanced features and tweaks you can play with, too.
I would also be remiss not to note that you can actually use multiple SunPCi cards in a single machine, as that’s a fully supported configuration. You can totally get a big SPARC server, put multiple SunPCi cards in it, and let users log in remotely to use them, perhaps using Sun’s true thin client offering, Sun Rays. This is foreshadowing.
As for other operating systems – I’ve seen rumblings online that versions of NetBSD and Debian from the early 2000s were made to work on the SunPCi II (the previous model to what I have), but I can’t find any information on anything else that might work. The issue is that any operating system running on the card needs drivers for the emulated hard disk, which are obviously not available as those were made by Sun. Since the SunPCi IIIpro has an actual IDE connector, though, I’ve been wondering if it would at least be possible to boot and run an “unsupported” operating system using the external method (dedicated display, mouse, keyboard, etc.). If there’s interest, I can dive into this in the future and report back.
All in all, though, the SunPCi IIIpro is a much more thoughtful and pleasant product to use than I originally anticipated. Sun clearly put a lot of thought into making this card and its features as easy to use as possible, and I can totally see how it would make it palatable to use a SPARC workstation in an otherwise Windows-based corporate environment. Just load up Outlook or whatever Windows-based groupware thing your company used using the SunPCi software, and use it like any other application in your Solaris 10 SPARC environment. Since you can set the SunPCi software to load at boot, you’d probably mostly forget it was running on a dedicated PC on an expansion card, and your colleagues would be none the wiser.
To round out the Sun Microsystems ecosystem of the late 2000s, we really can’t get around explaining why the network is the computer. It’s time to talk Sun Ray.
During most of its existence, Sun’s slogan was the iconic “The network is the computer“, coined in the early 1980s by John Gage, one of the earliest employees at Sun. Today, the idea behind this slogan – namely, that a computer without a network isn’t really a computer – is so universally true it’s difficult to register just how forward-thinking this slogan was back in 1984. These days, everything with even a gram of computer power is networked, for better or worse, and the vast majority of people will consider any PC, laptop, smartphone, or tablet without a network connection to be effectively useless.
Gage was right, decades before the world realised it.
The product category that embodies Sun’s iconic slogan more than anything is the thin client, and Sun played a big role in this market segment with their line of Sun Ray products. The Sun Ray product line consisted of a variety of products, but the main two components were the Sun Ray Server Software and the various Sun Ray thin clients Sun (and Oracle) produced between 1999 and 2014. The server component would run on a server (or workstation, as we’ll see in a moment), and the Sun Ray client devices would connect to said server over the network.
The idea was that you had a giant server somewhere in your building, running the Sun Ray Server Software, accompanied by whatever number of Sun Ray thin clients you needed on employees’ desks. Each of your employees would have a user account on the server, and could log into that user account using any of the Sun Ray thin clients in the building. The special ingredient was the fact that Sun Rays were stateless, which meant that the thin clients themselves stored zero information about the user’s session; everything was running on the server.
This special ingredient made some real magic possible, most notably hotdesking, which, admittedly, sounds like something LinkedIn professionals do on OnlyFans, but is actually way cooler. You could roam from one Sun Ray to the next, and your desktop, including all the applications you were running and documents you had opened, would travel with you – because they were running on the server. Sun also bet big on smartcards, so instead of logging in with a traditional username and password, you could also log in simply by sliding your smartcard into the card reader integrated into every single Sun Ray. Take your smartcard out, and your session would disappear from the display, ready to continue where you left off on any other Sun Ray.
And yes, you could make this work across the internet as well.
Reading all of this, you may assume Sun Rays and hotdesking involved a considerable amount of jank, but nothing could be further from the truth. I’ve been fascinated by thin clients in general, and Sun Rays in particular, for decades, but I never had the hardware to properly set up a Sun Ray environment at home – until now, of course. With the Ultra 45 all set up and running, and the generous Sun Ray-related donations from Adriaan and Volker, I had everything I needed to set up some Sun Rays. If the Ultra 45 is the central hub, Sun Rays are its spokes.
Let’s hotdesk like it’s 2007.
I expected setting up a working Sun Ray environment would be a difficult endeavour, but nothing could be farther from the truth. It turns out that installing, setting up, and configuring the Sun Ray Server Software is incredibly easy, and Nico Maas made it even easier back in 2009 by condensing the instructions down to the bare essentials (Archive.org link just in case). After following Maas’ list of steps (you can skip the personal notes section at the end if you’re not using a dedicated network card for the Sun Ray Server Software), any Sun Ray you connect to your network and turn on will automatically find the Sun Ray Server, perform any possible firmware updates, and show a login screen.
From here, you can log into any user account on the Sun Ray Server (the Ultra 45, in my case) as if you’re sitting right behind it. Depending on which generation of Sun Ray you’re using, loading your desktop will either be fast, faster, or near instantaneous. Thanks to Sun’s network display protocol, the Application Link Protocol, performance is stunningly good. Even on the very oldest Sun Ray 1 device I have, it feels genuinely like you’re using the machine you’re remotely logged into locally.
As part of Maas’ instructions, you also installed Apache Tomcat, included in the Sun Ray Server Software’s zip file, which is a necessary component for the graphical configuration and administration utility. Since we’re talking 2000s Sun, this administration GUI is, of course, a web application written in Java, accessible through your browser (I suggest using the copy of Netscape included in Solaris 10) by browsing to your server’s IP address at port 1660. After logging in with your credentials, you’ll discover a surprisingly nice, capable, and detailed set of configuration and administration panels, which mirror many of the CLI configuration tools. Depending on your preference, you may opt to use the CLI tools instead, but personally, I’m an absolute sucker for 2000s enterprise GUIs.
While there’s a ton of configuration options to play around with here, the ones we’re looking for have to do with setting up smartcards so we no longer have to use bourgeois banalities like usernames and passwords. To enable logging in with a smartcard, you’ll obviously need a smartcard – I have a Sun-branded one, which are objectively the coolest – but there are other options. You’ll then need to read the token on said smartcard, and associate that token with your user account. As a final step, you need to utterly wreck the security of your setup by enabling passwordless smartcard login.
This process is fairly straightforward, but there are a few arbitrary details you need to be aware of. First, designate a Sun Ray as a card reader: go to Desktop Units, select the unit you’d like to use, click Edit, and under Advanced tick “Desktop unit is used as token reader” – then click OK and perform the cold restart of the Sun Ray Server Software as instructed. Once the restart is completed, make sure the smartcard you wish to use is inserted, then go to the Tokens tab, click on New…, and you’ll see that the token is already read and selected. Enter your username next to “Owner:”, and save. Make sure to undo the “Desktop unit is used as token reader” setting, and you’re good to go.
If you wish to log in entirely passwordless – as in, you just need to insert the smartcard, no typed credentials required – you need to go to Advanced > System Policy, scroll down to “Session Access when Hotdesking”, and tick the box next to “Direct Session Access Allowed”. It should go without saying that this is quite insecure, as someone would just need to yoink your smartcard to break into your account. For whatever that’s worth, on a retro environment in your own home.
Now you can properly hotdesk. Insert your smartcard into any connected Sun Ray, and your desktop will automatically appear, running applications and all. Take the card out, and the Sun Ray login screen will reappear. Wherever you insert your smartcard, your desktop will show up. This way, your session will travel with you no matter where you are – as long as there’s a Sun Ray to log into, you can continue working, even across the internet if that functionality has been enabled. Nothing about this is particularly complex technology-wise, but it absolutely feels like magic.
Let’s dive a little deeper into the Sun Ray clients. Sun (and later Oracle) produced a wide variety of them over the years, but roughly they can be divided into three generations. Disregarding the extremely rare early prototypes, the Sun Ray 1 is probably the most iconic model, as far as its design goes. It also happens to be the only model powered by a SPARC chip, the 100MHz microSPARC IIep accompanied by ATI Radeon 7000 graphics. The second generation switched from SPARC to MIPS, a 500MHz RMI Alchemy Au1550 (built by AMD) accompanied by ATI ES1000 graphics. The third generation of Sun Rays moved to either a 667MHz RMI Alchemy Au1380 for the base model, or the MIPS 750MHz RMI XLS104 for the Plus model, both with graphics integrated into the processor.
None of these core specifications really matter though, as the performance will mostly be identical. What really matters is the port selection, and the display resolution the Sun Ray unit is capable of outputting. I would strongly suggest opting for models with DVI output capable of handling at least 1920×1080, since full HD panels are easy to come by and you probably have a few lying around anyway. All models of Sun Ray have USB, audio, and Ethernet ports, so you’re good there, but the Sun Ray 2FS and Sun Ray 3 Plus also have fibre optic Ethernet options. You know, just in case you really want to go nuts.
Your eyes are not deceiving you. That’s a KDE-branded Sun Ray 2FS, and according to Adriaan, there’s only two of these in existence.
Sun also produced various Sun Ray models with integrated displays for that all-in-one experience, including one with a CRT. Beyond Sun, various third parties also made Sun Ray-compatible devices, offering form factors Sun didn’t explore, like laptops and even tablets. Sadly, these third party models seem to be exceedingly rare, and I’ve never seen one come up for sale anywhere. I would personally haul 16 tons and owe my soul to the King of Lanai’s company store to get my hands on a laptop and tablet model.
But what if you don’t want to deal with the hassle of real hardware? Thin clients or no, these things still take up space and require a ton of cabling and peripherals, which quickly adds up (I have three Sun Rays and their peripherals hooked up… on the floor of my office). Fret not, as Sun and later Oracle also released virtual Sun Ray client software, allowing you to log into the Sun Ray Server from any regular PC. Called the Sun Desktop Access Client or the Oracle Virtual Desktop Client, it’s a simple Java-based application for Linux, Solaris, Windows, and Mac OS, available in 32 and 64 bit variants. Alongside the entire Sun Ray lineup, this piece of software was retired in 2017, but it’s still freely available on Oracle’s website, provided you manage to navigate Oracle’s byzantine account signup, login, and download process.
I only tested the version available for Linux, and to my utter surprise, it still works just fine! Since I run Fedora, I downloaded the 64bit RPM, and installed it with rpm -ivh --nodigest ovdc-3.2.0-1.x86_64.rpm. You’re going to need two dependencies, available from Fedora’s own repositories through the following packages: libgnome-keyring and libsnl. Once installed, the Oracle Virtual Desktop Client will appear in your desktop environment’s application menu, or you can start it with ovdc from the terminal. Before you start the application, though, you’re going to need to enable the option “Sun Desktop Access Client” in the Sun Ray Server Software web admin under Advanced > System Policy, and perform a warm restart.
You’ll be greeted by an outdated Java application, so on my 4K panel the user interface looks positively tiny, but otherwise, it works entirely as expected. Enter the IP address of the machine you’re running the Sun Ray Server Software on, and the login screen will appear as if you’re using a hardware Sun Ray client. I find it quite neat that this ancient piece of software – last updated in 2014, its RPM last updated in 2017 – still works just fine on my Fedora 43 machines. There were also Android and iOS variants of the Oracle Virtual Desktop Client, but they’re no longer available in their respective application stores, and the Android APK I downloaded refused to install on my modern Android device.
The Sun Ray ecosystem is, even in 2025, a versatile and almost magical experience. It’s incredibly easy to set up and use, but of course, effectively useless today, in that Solaris 10 and the GNOME 2.6-based desktop the Sun Rays provide remote access to are outdated and lack the modern applications we need. Still, this entire exercise has given me an immense appreciation for what Sun’s engineers built back in the late ’90s and early 2000s, and I wish the Sun Ray ecosystem didn’t die out at Oracle alongside everything else island boy got his hands on.
But did it die out, though? Recently, I reported on the latest OpenIndiana snapshot, and wrote this:
A particularly interesting bullet point is maintenance work and improvements for Sun Ray support, and the changelog notes that these little thin clients are still popular among their users. I’m very deep into the world of Sun Rays at the moment, so reading that you can still use them through OpenIndiana is amazingly cool. There’s a Sun Ray metapackage that installs the necessary base components, allowing you to install Sun’s/Oracle’s original Sun Ray Server software on OpenIndiana. Even though MATE is the default desktop for OpenIndiana, the Sun Ray Server software does depend on a few GNOME components, so those will be pulled in.
↫ Thom Holwerda at OSNews
Now that you’re reading this article, it means the hold this project has had over my life has lessened somewhat, hopefully giving me some time to dive into OpenIndiana further. I’ve had issues getting it to boot and work properly on any of my devices, but knowing it’s still entirely compatible with Sun Ray means I might build a machine specifically for it. The sun must shine.
Could Sun’s ecosystem have made for an excellent computing environment at home? I’m realistic and realise full well that nobody was going to buy an expensive Ultra 45 or otherwise set up a SPARC server with a SunPCi card and Sun Rays just for home use. These were enterprise products with enterprise prices, after all. Still, I think the basic idea of a powerful central computer in the home – perhaps a server in the utility closet – accompanied by a number of cheap thin clients is sound. Most of our computers sit idle for most of their lifetime, and there’s really no need for every member of a household to have their own overpowered desktop and/or laptop.
Twenty years ago, Sun’s ecosystem showed us that such a setup need not be complex, janky, or cumbersome, and with a bit more end-user focused polish it would’ve made for an amazing home computing environment. Of course, there’s far more profit to be made in selling multiple overpowered computing devices to each consumer, especially if you can also manage to force them into subscription software and “cloud” services, while showing them ads every step of the way.
I’ve only scratched the surface of everything you could possibly do with the hardware and software covered in this article, as I didn’t want to get too bogged down in the weeds. There’s other operating systems to try on the Ultra 45, there’s compatibility to explore with the SunPCi, there’s OpenIndiana to install to see just how hard it is to get the Sun Rays working with a modern operating system, and, of course, there’s a ton of finer details to tweak, fiddle with, and discover. I haven’t had this much fun with a bunch of computing devices in ages.
This deep dive into Sun’s ecosystem has consumed most of my life over the past few months. I know full well writing 10,000 word articles on dead Sun tech from the 2000s is not a particularly profitable use of my time. This isn’t going to draw scores of new readers to the OSNews Patreon and Ko-Fi and make me rich. For profit, I should be making YouTube videos with fast transitions and annoying sound effects to stir up drama based on GitHub discussions or LKML posts with my O face in the thumbnails.
But someone needs to show the world what we lost when Sun died and the industry enshittified. Tech has hit rock bottom, and I want to show everyone that yes, we can build something better.
We only have to look back at what we lost, instead of forward at what’s left to destroy.
Earlier this month I alluded to being humbled by an offer to contribute to something meaningful. My first episode is now out, so I can now share that I’m one of the new co-hosts for BSD Now!
/etc/hosts - Time to update our /etc/hosts file…
I’ve mentioned BSD Now here many times, but for those uninitiated, it’s a podcast where a revolving panel of hosts discuss BSD and related news. I’m taking over from Benedict Reuschling, whom I had the pleasure of meeting a few times at various AsiaBSDCon events.
This episode was a bit of an interview between me and Jason Tubnor, who hails just south of me here in Melbourne. It was a lot of fun talking about our shared experiences with BSD, how I got started, and why I use and prefer NetBSD and FreeBSD over almost anything else these days.
BSD Now comes out each week, and you’ll likely be hearing from me a couple of times a month. Check them out, and don’t forget to support the show as well if you can. Cheers!
By Ruben Schade in Sydney, 2025-11-15.
My 2018 Mac mini (64G RAM, 2T SSD) has long been a trusty multi-VM pkgsrc and notqmail build machine, mostly via SSH. And during the first couple COVID years when I was still consulting independently but we were out of the country, it was also a trusty low-latency system for collaborative coding sessions with USA-based clients, mostly via screen sharing.
The mini still performs quite well. I still rely on it for keeping my NetBSD VPS running on the latest -current pkgsrc every week or so. But the macOS NFS service had a tendency to be a source of annoyance and/or effort on each new major release. NetBSD’s NFS client got fixed, which was enough to get me by, but my Tribblix and Linux VMs had already been basically unusable for a while. And macOS had lately gotten a little unstable after reboot: sometimes misrendering the login screen, freezing after a correctly entered password, or suddenly pegging the fans for no apparent reason and abruptly powering off. So when macOS Tahoe dropped support for nearly all Intel Macs, I was already game to repave mine.
I generally prefer NetBSD when possible, and generally consider my non-NetBSD systems to be only temporarily so (e.g., Small ARMs).
Hosting a pile of nvmm-accelerated VMs while also building natively for my primary NetBSD production target would have been a solid use case.
But the 2018 mini has a T2 security chip that makes a bunch of basic onboard devices relatively difficult for an OS to attach, and Linux appears to be the only free OS that mostly deals with this.
Even then, we’ll need a T2-customized installer and special attention.
$ cd ~/Downloads
$ bash <<<1 <(curl -sL https://github.com/t2linux/T2-Mint/releases/latest/download/iso.sh)
$ sudo dd if=linuxmint-*-cinnamon-*-t2-*.iso of=/dev/$YOUR_USB_STICK
$ rm -f linuxmint-*-cinnamon-*-t2-*.iso
In the Startup Security Utility, choose “No Security” and “Allow booting from external or removable media.”

$ for i in \
"mklabel gpt" \
"mkpart ESP fat32 1MiB 513MiB" \
"set 1 esp on" \
"set 1 boot on" \
"mkpart Root btrfs 513MiB 100%"; do
sudo parted $YOUR_DISK_DEVICE $i
done
$ sudo mkfs.fat -F32 -n ESP ${YOUR_DISK_DEVICE}p1
In the installer’s partitioner, choose the btrfs journaling file system for the root partition, tick its format checkbox, and set the mount point to /.

$ for i in proc dev dev/pts; do
sudo mount -B /$i /target/$i
done
$ sudo chroot /target
git:

# echo | apt install etckeeper
# cd /etc
# git branch -m pet-power-plant
# git gc --prune
grub:

# echo 'GRUB_RECORDFAIL_TIMEOUT=0' > default/grub.d/60_skip_grub_prompt.cfg
# etckeeper commit -m 'Skip grub prompt.'
# update-grub
# apt install libarchive-tools
# curl -sL https://master.dl.sourceforge.net/project/mac-icns/mac-icns.iso \
| bsdtar -xOf- iconverticons.com/os_linuxmint.icns \
> /boot/efi/.VolumeIcon.icns
# diskutil list
# diskutil mount /dev/$YOUR_EFI_SYSTEM_PARTITION_DEVICE
# bless --folder /Volumes/ESP/EFI/BOOT --label "Linux Mint"
No grub prompt, just straight through the Mint logo to the login screen.

sudo:

$ echo '%sudo ALL=(ALL: ALL) NOPASSWD: ALL' \
| sudo tee /etc/sudoers.d/10sudo_nopasswd >/dev/null
$ sudo chmod 440 /etc/sudoers.d/10sudo_nopasswd
$ sudo etckeeper commit -m 'Skip sudo password prompt.'
$ boltctl list # find your device's UUID
$ sudo boltctl enroll --policy auto $YOUR_THUNDERBOLT_UUID
$ sudo apt install dmg2img
$ echo 7 | sudo get-apple-firmware get_from_online
$ echo | sudo apt install t2fanrd openssh-server
$ sudo systemctl enable --now ssh
$ sudo etckeeper commit -m 'Enable sshd.'
$ echo y | sudo fwupdmgr get-updates
$ echo | sudo apt install tmux vim myrepos tig silversearcher-ag qemu-system-x86-64 kdeconnect dropbox
I’ve got some older Mac minis that may also soon find gainful employment around here.
NetBSD is one of my favourite operating systems. It runs on almost everything, and skills learned on one architecture are largely transferable. That, plus its modest system requirements (at least when compared to a modern Linux distro), means I’ve only ever run it on older kit, whether it be a ten-year-old ThinkPad or a 486.
My post yesterday about the incredible SilverStone FLP02 retro-inspired case included this throwaway line at the end:
Right now I can’t decide whether to turn this into my primary workstation, or move my FreeBSD jail and bhyve host into it, or my Alpine Xen box, or live out my dream of building a new NetBSD NVMM test host so I can finally retire the HP Microserver.
This led me to think… what would a new NetBSD machine look like? Well, new by my standards means “second hand, and released within the last few years”, but you get my point.
I’m thinking something like this:
A new(ish) Ryzen CPU with SVM and a decent core count for running NVMM-accelerated guests, emulated QEMU guests (cough Alpha, cough SPARC), and my various chroot’ed services. Aaah that would be so cool.
ECC memory would be a plus.
Integrated graphics, probably. The proprietary (sigh) Nvidia graphics work more easily on FreeBSD than on Linux, but we don’t have that on the orange flag side of the fence. I wouldn’t be running games on this anyway, except for the important text-based ones.
NVMe for boot and primary storage with lvm and cgd, and maybe a pair of 2.5-inch SSDs for scratch. I’ve never actually messed with RAIDframe, but I want to give it a try. I’m also not sure what the state of ZFS is on NetBSD, but could be another thing to mess with.
A nice, supported discrete sound card like an Audigy Fx, for no other reason than I miss having discrete sound cards.
A supported 10 GbE (or at least a 2.5 GbE) NIC for testing/tuning, though I’d be unlikely to max this out.
My HP Microserver Gen8 box has been a faithful NetBSD tinkering box for almost ten years now, but even that was kinda old when I first got it. I’m intrigued to see what running NetBSD on newer hardware would be like in 2025.
By Ruben Schade in Sydney, 2025-11-12.
In February I talked about boring being a feature, which ruffled more than a few feathers on the usual news aggregator sites; one of whom called me a “fedora-wearing netbsd idiot” which I still intend to print on a business card one day.
Anyway, Linus Torvalds, regarding the latest Linux 6.18 release candidate:
Things remain calm and small, and everything looks pretty normal. […] In other words: it all looks just the way I like it at this point: small and boring.
Turns out this isn’t a difficult concept to grasp after all.
By Ruben Schade in Sydney, 2025-11-10.
This report was written by Vasyl Lanko as part of Google Summer of Code 2025.
As of the time of writing, there is no real sandboxing technique available on NetBSD. There is chroot, which can be considered a weak sandbox: it changes the root directory of the process, restricting the process' view of the file system, but it doesn't isolate anything else, so all networking, IPC, and mounts inside this restricted file system are the same as the host system's, and remain accessible.
There has already been some research on implementing kernel-level isolation in NetBSD with tools like gaols, mult and netbsd-sandbox, but they haven't been merged into NetBSD. Other operating systems have their own ways to isolate programs: FreeBSD has jails, and Linux has namespaces.
The goal of this project is to bring a new way of sandboxing to NetBSD. More specifically, we want to implement a mechanism like Linux namespaces. These namespaces allow the isolation of parts of the system from a namespace, or, as the user sees it, from an application.
NetBSD has compat_linux to run Linux binaries on NetBSD systems, and the implementation of namespaces can also be utilized to emulate namespace-related functionality of Linux binaries.
A simple example to visualize our intended result is to consider an application running under an isolated UTS namespace that modifies the hostname. From the system's view, the hostname remains the same old hostname, but from the application's view it sees the modified hostname.
Linux has 8 namespace types; in this project we will focus on only 2 of them: the UTS and mount (MNT) namespaces.
Linux creates namespaces via the unshare or clone system calls, and these will also be our entry points into the namespace creation logic.
We set up the base for implementing Linux namespaces in the NetBSD kernel using kauth, the subsystem managing all authorization requests inside the kernel. It associates credentials with objects, and because the namespace lifecycle is tied to the credential lifecycle, it handles all the credential inheritance and reference counting for us. (Thanks kauth devs!)
We separate the implementation of each namespace into a different secmodel, resulting in a framework similar to Linux's which allows the isolation of a single namespace type. Our implementation also lets users pick whether they want namespace support, and of what kind, via compilation flags, just like in Linux.
UTS stands for UNIX Time-Sharing System, as it allowed multiple users to share a single computer system. Isolating the utsname can be useful to give users the illusion that they have control over the system's hostname, and also, for example, to give different hostnames to virtual servers.
The UTS namespace stores the namespace's hostname, domain name, and their lengths. To isolate the utsname we need to first create a copy of the current UTS information, plus we need a variable containing the number of credentials referencing this namespace, or, in simpler terms, the reference count of this namespace.
This namespace specific information needs to be saved somewhere, and for that we use the credential's private_data field, so we can use a UTS_key to save and retrieve UTS related information from the secmodel. The key specifies the type of information we want to retrieve from the private_data, hence using a UTS_key for the UTS namespace. The key for each namespace is a fixed value (we don't create a new key for every credential), but the retrieved value for that key from different credentials may be different.
We had to modify kernel code that was directly accessing the hostname and domainname variables, to instead call get_uts(), which retrieves the UTS struct for the namespace of the calling process. We didn't modify occurrences in kernel drivers because drivers are not part of any namespace, so they should still access the system's resources directly.
The MNT namespace isolates mounts across namespaces. It is used to have different versions of mounted filesystems across namespaces, meaning a user inside a mount namespace can mount and unmount whatever they want without affecting or even breaking the system.
The mount namespace structure in Linux is fairly complicated. To have something similar in NetBSD we need to be able to control the mounts accessed by each namespace, and for that we need to control each namespace's mountlist. This is also enough for unmounting file systems, because in practice we can just hide them.
For the mount namespace, the mountlist structure and the number of credentials using the namespace are stored in the credential's private data under the MNT_key. Similarly to the UTS namespace, we had to modify kernel code to not access the mountlist directly, but instead go through a wrapper called get_mountlist(), which returns the correct mountlist for the namespace the calling process resides in.
Implementation of the mount namespace is immensely more complex than of the UTS namespace. It requires a good understanding of both Linux and NetBSD behaviour, and I would frequently find myself wondering how to implement something after reading the Linux man pages, which would lead me to look for it in the Linux source code, understand it, go back to the NetBSD source code, try to implement it, and discover it was too different to implement the same way.
You can find all code written during this project on GitHub in the maksymlanko/netbsd-src gsoc-bubblewrap branch. Because I intend to continue this work outside of GSoC, I want to point out that this was the last commit made during GSoC on the gsoc-bubblewrap branch, and this was the last one for the still-WIP mnt_ns branch.
The link includes the implementation of the general namespace code via secmodels, the implementation of the UTS namespace and related ATF tests, and the work-in-progress implementation of mount namespaces.
The mount namespace functionality is not finished, as it would require much more work than the time available for this project. Completing it would require invasive and non-trivial changes to the original source code and, of course, more time.
As previously mentioned, Linux has 8 namespace types; it is important to see which of the missing ones are considered useful and feasible to implement.
I believe that after mount namespaces it would be interesting to implement PID namespaces, as these in combination with mount namespaces would permit process isolation within the sandbox. Afterwards, implementing user namespaces would allow users to get root-like capabilities inside the namespace, giving them sudo permissions while still restricting system-wide actions like shutting down the machine.
Lower-hanging fruit is the namespace management functionality, which in Linux is lsns to list existing namespaces and setns to move the current process into an already existing namespace.
In the end, Linux and NetBSD are different operating systems, implemented in different ways. Linux is complex and it is not trivial to port namespaces to NetBSD.
The project is called "Using bubblewrap to add sandboxing to NetBSD" and was initially planned to emulate the unshare system call in compat_linux, but, seeing that namespaces could be useful for NetBSD itself, and that it would be easy to add them to compat_linux afterwards, we decided to implement namespaces directly in the NetBSD kernel instead. Implementing the other system calls necessary to make the bwrap Linux binary work correctly also wouldn't be as satisfying as implementing namespaces directly in NetBSD. So, while the project was initially called "Using bubblewrap to add sandboxing to NetBSD", nowadays it would be more accurate to call it "Sandboxing in NetBSD with Linux-like namespaces".
I am very grateful to Google for Google Summer of Code, because without it I wouldn't have learned so much this summer, wouldn't have met so many smart and interesting people, and for sure wouldn't have tried to contribute to a project like NetBSD, even though I always wanted to write operating systems code... But the biggest thing I will take with me from this project is the confidence to contribute to NetBSD and other open source projects.
I would also like to thank the members of the NetBSD organization for helping me throughout this project, and more specifically:
LD_LIBRARY_PATH bug I had on my system which wouldn't let me finish compiling NetBSD, and general GSoC recommendations.
tech-kern, with whom I discussed ideas for projects and proposal suggestions, and in the end inspired the namespaces project.

I love Dan cases for small loungeroom desktops, but SilverStone make my favourite kit for homelab servers and workstations. The manufacturer made massive waves at Computex in Taipei this year with their second retro-inspired case, the FLP02. Tom’s Hardware and GamersNexus had the best reviews I saw of it. Dullards spammed comment boxes decrying they “DON’T SEE THE POINT!!one!1!”, which is always a sign you’re onto something good.
As a hopeless retro tragic, I knew I had little choice but to place a preorder a few months ago. It arrived yesterday, and I’m so excited to blog about it I didn’t get time to clean the study or even move the half-assembled IKEA furniture out of the way!

I still can’t believe this exists. No really, my inner child is bouncing up and down so hard he risks breaking something. This is so shockingly the case of my dreams it’s damned near terrifying. This is the box, for a case, in 2025! Yes I know the puritans pointed out that nobody sported three 5.25-inch disk drives, but shut up that’s why.
Let’s get it out of the box:

Having spent the last few weeks trying to rearrange a real 1980s AT clone desktop that weighs as much as a fully-laden houseboat, this case is feather light. Sturdy and solid like all SilverStone kit, but light. I can already see they nailed the beige aesthetic and colour, even with all the protective packaging.
Okay, time for the big reveal:

Wowza! Do people say that anymore? Did they ever? This is, in the most unironic tone I can muster, the single best modern computer case I think I’ve ever seen. I adore my Mini-ITX cases, but this thing is HUGE, and BEIGE, and just so damn well executed! I’m floored by the build quality.
Let’s take the protective tape off and see how this fan grill assembly works:

Pretty nice. There are two 120 mm fans in the front, and one in the rear. The front panel pops off and slides down, with a built-in dust filter that would be easy to clean. This is an upgrade from my old (but still beloved) Antec 300, which requires the whole front of the case to be popped off to access it.
Moving up the front we can see the hidden panel at the top which pops down for additional IO, and those amazing front buttons that power on the machine and adjust the fan controller speed. The CPU turbo LEDs even report the controller fan speed, which I can’t wait to try.

The only downgrade from the FLP01 is that the fake drive panels don’t pop down anymore; they’re just covers. That’s fine; I fully intend to populate them with a beige LG Blu-ray burner I bought in Japan, a Zip drive, and a 360 KiB TEAC 5.25-inch drive for imaging and transferring data to old machines.
Looking inside, we see even the interior surfaces have been powder-coated that glorious beige; something most of us didn’t have at the time. You can see the two included 3.5-inch drive trays, and the space for the three 5.25-inch drives (or four if you remove the fan controller). To the left we have seven full-width ISA slots (see what I did there?) and two vertical ones. It has modern conveniences like rubber side wall grommets for cable management, and a shroud for the power supply. It also has a bracket for sagging GPUs, which can thankfully be removed:

I’m sure the regular PC case reviewers will eventually upload their reviews with far more detail and better lighting than our messy study at present. But what I can do that might be a bit unique is compare it to a real beige tower from the tail end of the era. Here it is alongside the 1999 Dell Dimension 4100, and the Sanyo MPC-880 off to the side:

As you can see, the FLP02 is significantly taller, wider, and deeper. The side “grills” mask the added width the case extends out from those 5.25-inch drive bays. It would absolutely dwarf my HP Brio BA600, Japanese NEC APEX, or even the IBM Aptiva 2199. I think the trade-off in size is worth it for those modern conveniences like cable management, though it’s something to keep in mind if you intend to stand it alongside the machines upon which it was loosely modelled.
In subsequent posts I’ll compare the FLP02 to the FLP01 in internals and design, and share my ideas for the build that will be going into this machine. Right now I can’t decide whether to turn this into my primary workstation, or move my FreeBSD jail and bhyve host into it, or my Alpine Xen box, or live out my dream of building a new NetBSD NVMM test host so I can finally retire the HP Microserver. We’ll see!
By Ruben Schade in Sydney, 2025-11-06.
I just came back home from Google Summer of Code 2025 Mentor Summit. We were 185 mentors from 133 organizations and it was amazing!
Google Summer of Code (GSoC) is a program organized by Google with the focus to bring new developers to open source projects.
The NetBSD Foundation has been participating in GSoC since 2005.
After nearly a decade being part of GSoC for The NetBSD Foundation, first as student and then as mentor and org admin, I finally attended my first GSoC Mentor Summit! That was a fantastic, very intense and fun experience! I met with a lot of new folks and learned about a lot of other cool open source projects.
Let's share my travel notes!
Going to Munich is relatively doable by train from my hometown. I departed from Urbisaglia-Sforzacosta at 6:59 in the morning. I had around 25 minutes to wait for the change from Ancona to Bologna. I arrived in Bologna at around 11:30 where I met Andrea, my friend and favorite music pusher since childhood. We had lunch together, eating tasty miso veggie ramen, drank some hot sake and then we had coffee. He then accompanied me back to the station where I had the train to Munich Central Station at 13:50.
The scenery from the train was really nice. Near Trento and Bolzano/Bozen it was full of vineyards and apple orchards with mountains in the background. It was cloudy for most of the trip, but starting from Bressanone/Brixen I began to see «beautiful blue skies and golden sunshine». After Bressanone the scenery was more unspoiled, with light green grazing lands. Unfortunately, when reaching Brennero/Brenner (the last Italian town before Austria) it started to get dark and I could not enjoy the rest of the scenery in Austria and Germany. I arrived in Munich at 20:50 and checked in at my hotel, which was around 1 km from the station.
For this journey I was not alone! Christoph Badura (<bad@>) was also a delegate for Google Summer of Code, and we had been in touch to get dinner and beers together. Christoph had some train delays, but at 21:40 we were able to meet and went for a walk a bit to the south-east to find some places to eat and drink.
I had done my homework for the beer places (obviously!), but the place on my TODO list to visit on Wednesday did not have a lot of food, so we decided to first go to a restaurant and found Ha Veggie - Vietnamese Cuisine. I had some Edamame and Bò xào sả ớt, a delicious dish with seitan, vegetables, lemongrass and chili pepper.
We then stopped at Frisches Bier, a bit too late, but the publican was kind enough and she permitted us a last round. I had a pint of a refreshing Hoppebräu Wuide Hehna Session IPA.
We then took a walk back to our hotels, talked a bit and went to sleep.
Thursday was the first day of the Mentor Summit. The summit was in Munich Marriott Hotel, more in the north of the city, around 5km from the central station.
I checked out of my hotel and walked to the city center in Marienplatz and nearby. I also stopped in a couple of shops to grab some souvenirs for my family and friends and then took a long walk in the direction of Munich Marriott Hotel, to hopefully be there at 13:00 sharp for the start of check-in of GSoC Mentor Summit.
Rear of the Siegestor showing its inscription that can be translated to "Dedicated to victory, destroyed by war, urging peace"
I walked through Ludwigstraße and enjoyed the architecture around me, walking near LMU University and Siegestor. I then proceeded through Leopoldstraße and Ungererstraße and then arrived to the Munich Marriott Hotel.
Chris was already in the lobby and had already checked in. We talked a bit and then I checked in as well. The room was huge and comfy! I quickly went back down to the lobby. We then checked in for the Mentor Summit, and I finally met Stephanie, Mary and Lucy, the GSoC Program Admins. I had also brought from home some classic and specialty Italian chocolate (Cioccolato di Modica) and left the bars in the chocolate room (more about that later!).
The time from 13:00-17:00 was reserved for mentors to arrive. At 13:00 there were still not a lot of mentors around, so Chris and I decided to have lunch. We had lunch in a trattoria where we had an antipasto of grilled vegetables, penne all'arrabbiata, and red wine from Montepulciano.
While eating, Chris talked about the Tantris, but we were already full. We did not taste the haute cuisine, but walked there just to look at the restaurant building. O:)
Tables full of chocolate and sweets from the chocolate room
When we came back to the Munich Marriott Hotel, I went to the chocolate room to taste some chocolate/sweets. At the GSoC Mentor Summit it is a tradition to bring great quality chocolate, or other sweets for places where chocolate is less usual, so folks can taste sweets from all over the world. That's a very nice initiative! I was more curious about non-chocolate sweets completely new to me, so I had some Laddu and Kaju katli, both delicious!
I spent the rest of the afternoon down at the Champions Bar socializing with other mentors.
We had dinner at around 19:00 with good food accompanied by a couple of glasses of Primitivo.
At 20:15 we had the Opening Session. Stephanie, Mary and Robert warmly welcomed us. They shared the schedule and introduced us to the unconference format of the sessions. We then had dessert and played the GSoC 2025 Mentor Summit Scavenger Hunt. The Scavenger Hunt is a game where you have to find 25 different folks matching traits ranging from common (e.g. «prefers spaces (not tabs)») to pretty rare (e.g. «has jumped out of a helicopter»). This game was nice because it was also a great conversation starter. I met a lot of mentors of open source software that I regularly use, and also learned about new interesting open source software and organizations while doing that!
We had until Friday at 12:30, and 10 lucky mentors who completed it (in the end around 60 of 185 were able to) were randomly selected and got special prizes.
I stayed up until probably 1am or so, socializing a bit more in the lobby and then went back to my room to have some sleep, knowing that Friday was completely packed with Lightning talks and sessions!
I had breakfast around 8:20 at the Green's Restaurant. I sat at a table together with other folks, and after a minute I saw a known name in front of me: Lourival Pereira Vieira Neto <lneto@>! I was very happy to meet another NetBSD developer, and that was a complete surprise. He was there as a mentor of the LabLua organization.
GSoC Program Admins welcomed us for the day and recapped the schedule for Friday and Saturday.
The lightning talks consisted of mentors presenting their best GSoC 2025 projects. The format was fast and fun: a maximum of 3 minutes for the talk and a maximum of 4 slides! We had presentations from 18 different mentors and orgs and all of them were able to stay under 3 minutes!
That was also a great occasion to learn about open source projects and new orgs, and the experiences shared were interesting too.
After the 1st round of Lightning Talks I attended the GSoC Feedback Session. That was a Q&A session with program admins and org admins/mentors.
Hot topics this year were AI usage and spam applications; these were not discussed as part of this session because there were two separate sessions about them later.
If I had only one sentence to summarize this session... I would quote Robert, who shared that Google Summer of Code is about the journey for the contributors and mentors to get involved in open source. The coding is only the means to an end.
After the first session I decided to take a break and instead stay in the "hallway track", where I met new folks and socialized a bit. Another funny, and at the same time a bit embarrassing, thing about GSoC is that I would often meet someone and, after a couple of minutes of talking, associate the face with a name and figure out that I'm an avid user / pkgsrc MAINTAINER of the software they contribute to! :)
At 12:30 we had lunch at Green's Restaurant and then at 13:40 we had a group photo and it was pretty tricky to put around 200 folks (program admins and mentors) on the stage of the Ballroom! :)
In the afternoon I joined the session about improving diversity. In open source unfortunately there are a lot of underrepresented groups and we should fix that.
There were a lot of experiences shared from several orgs, food for thought for me! To name just a few topics: Outreachy, how to know and create safe spaces, the importance of localization in software and documentation, making sure underrepresented folks are part of key people, and trying to take the burden of other tasks off them.
Artificial Intelligence (AI), in particular Generative AI (GenAI) has been a hot topic since project proposals opened this year!
Some people consider it a speed-up for research, but at the same time it impedes learning.
In NetBSD, according to our Commit Guidelines, code generated by a large language model (LLM) or similar technologies is considered tainted code, because such models can be trained on copyrighted materials and the resulting code can then violate copyright.
More than 80% of GSoC contributors who filled in an anonymous survey used AI, mainly for code generation, code completion, text generation, debugging and error detection.
Most mentors are usually not happy with the outcomes of AI, with code often buggy, vulnerable, of poor quality, or violating copyright; some mentors also pointed out that, as part of mentoring, we should make contributors aware of the environmental/ecological impact of such use.
However, both the contributors' and mentors' surveys on AI are relatively small datasets (around 90 mentors and 90 contributors).
At 16:00 we had the 2nd and last round of Lightning talks. That was another great opportunity to learn more about more projects and organizations!
Christoph Badura (<bad@>) presenting his lightning talk
Christoph Badura (<bad@>) did a lightning talk too, presenting work done by Ethan Miller. Ethan also blogged about his work; please read Google Summer of Code 2025 Reports: Asynchronous I/O Framework if you missed it!
This code was also imported by Christos Zoulas (<christos@>), thanks Christos!, and is now part of -current; it will be in NetBSD 12.0.
After the Lightning talks there was a break and then at 17:30 I joined the session about GSoC spammy proposals.
This year most organizations received many more proposals, mostly due to contributors starting to massively use GenAI.
A lot of suggestions and tips were shared to make the mentor review job as smooth and easy as possible.
The most important suggestion is that mentors must have a 1:1 conversation with potential contributors before accepting them. The weight of the project proposal is like 2/10; the remaining 8/10 is on the conversations between mentors and the contributor.
Around 19:00 we had dinner, desserts and socialized. Stephanie also did a final talk recapping GSoC 2025 and thanking all mentors for making that possible.
We then had drinks, and then the karaoke session started (and there were a lot of pro folks doing that, very nice!).
The karaoke session at the Ballroom closed with the waiter singing Closing Time (not the one by Tom Waits that is mostly instrumental, but the one by Semisonic! I did not know it, but it's melancholic as well for me, it just smells a little less of whisky and cigs compared to the Tom Waits one ;)).
We went downstairs to the Champions Bar, had two rounds of some good Higgins Ale Works IPA and socialized a bit more. Time passed pretty quickly and also the barman there at Champions Bar started singing Closing Time!
We went a bit outside and in the lobby talking with other mentors and then I went back to my room to get some sleep for the last day of the summit.
I had breakfast around 8:00, a bit earlier, given that on Saturday the first session started at 9:00.
At 9:00 I joined a session about porting and packaging. We had FreeBSD porters, pkgsrc maintainers and maintainers of other package systems on one side; on the other side there were a lot of upstreams.
We shared dos and don'ts of packaging.
Christoph Badura (<bad@>) proposed a session to share experiences on how to get contributors to engage with the community, and a lot of mentors provided a lot of great suggestions.
One thing that most mentors agreed on, and that worked well, was to invite contributors to write regular (or less regular, to avoid putting too much pressure on them) blog posts / status updates.
Some organizations also did that as part of their weekly / bi-weekly updates that are often video meetings. In that case they reserved a slot for the contributor so that they can share their status updates.
These are great opportunities for the contributor to get in touch with the community.
I then joined a session about open source tools for supply chain security.
We discussed the Software Bill of Materials (SBOM) and its importance in the context of regulations like the EU Cyber Resilience Act (CRA).
We also discussed the Common Platform Enumeration (CPE) and Package URL (PURL) schemas.
We talked about vulnerability management, and I shared a bit of my experience in pkgsrc-security@ and how often the metadata in CVEs (like vendor, product and versions affected) is not that good. Most package systems have their own workflows and usually add extra metadata only for their own vulnerability DB.
I also learned about Vulnerability Exploitability eXchange (VEX) that some package systems use.
There were also mentors from AboutCode and SW360, projects that look very interesting; I should learn more about them!
Before lunch I joined the Vintage computing session.
Everyone introduced themselves and talked about the most vintage computer they had and how running old machines is both fun and productive, and we also talked about old Unixes and The Unix Heritage Society.
I had lunch at Green's Restaurant with other mentors and then socialized a bit outside Ballroom.
At 14:00 we had the last sessions. I went to the waitlisted lightning talks, which can be considered the Lightning talks, round 3! :)
Mentors from different organizations shared interesting projects.
After the lightning talks several of us shared funny stories and hacks: learning languages in interesting ways, taking photographs of garbage to train a robot that cleans it up, a sort of Tinder for food... and much more! :)
At 15:30 we had the closing session.
Some of us stayed for another 1 or 2 hours and talked, socialized a bit more and said see you soon to each other.
Around 17:00 I left the Munich Marriott hotel to check in to my hotel for Saturday night near Munich East Station. It had been raining for most of the morning and afternoon, but luckily around 17:00 it stopped and the sky looked fine. I decided to take a walk - a bit more than 6 km - to reach the hotel. Once again I was happy that I took a long walk, because I was able to spend most of it in the Englischer Garten.
Glass of Camba Island (NEIPA) beer and list of draft beers
I checked in at my hotel and then went to Tap-House, where Christoph and I had planned to have dinner and a couple of beers. Tap-House has a really huge selection of craft beers, with 40 on draft! We had a pinsa marinara, a flatbread similar to pizza, and I had a couple of small IPAs from Camba Bavaria and a Yankee & Kraut Pure HBC 630, a DNEIPA.
We decided to take a walk and went to BrewsLi for some last good-night beers. BrewsLi was a very nice brew pub. We sat at a table, and near us there were several board games. Chris took one of these, Mensch ärgere Dich nicht, and explained the rules to me. A lot of luck is involved, but there is also some strategy; it was fun to play, and we probably played for 40 minutes or so because most of our game pieces kept being sent back to the start. I walked with Chris back to the nearest metro station and got back to my hotel around 2:00.
Mensch ärgere Dich nicht board game
On Sunday I took a train from Munich East Station to Bolzano/Bozen, because it was unfeasible to go back home by train to the Marche region without a stop somewhere in Italy.
View of Bolzano from St. Oswald Promenade
I went for a walk up the Passeggiata di Sant'Osvaldo (St. Oswald Promenade) to enjoy a view of the city from the top until sunset.
I had a simple but very tasty onion soup, a Gose (really good, one of the best Goses I've ever had!), and a Session IPA at Batzen Häusl.
I was not able to visit anything else that night because I was pretty tired, so I went to sleep early.
In the morning I took a walk to Walther Square and nearby. I got some souvenirs for the family and then I took the trains back home and spent most of my day on the trains.
Google Summer of Code 2025 Mentor Summit was an amazing experience!
I had the chance to participate in very interesting talks, sessions and discussions. I met a lot of mentors from all over the world and learned about new open source projects and organizations. All the folks were also extremely positive, easy to talk to and I had a lot of fun.
Thanks to the program admins and all the mentors who made this possible! Thanks a lot also to Google for organizing it, and to The NetBSD Foundation for enabling me to go!
NetBSD hand-written logo on the GSoC guest book
If you are new to open source, consider applying for it! If you are a seasoned open source contributor, consider participating as a mentor! You can learn more about GSoC at Google Summer of Code (GSoC) website.
Please forgive the fluffy nature of this post. If you’re expecting a technical discussion, perhaps look through the archives instead today.
On Mastodon I talked about how I enjoyed being in the BSD, OpenZFS, and illumos communities, even if I only consider myself a lurker most of the time. I don’t know how else to say this, but… I really mean that. More than I could express.
These systems, and the people behind them, are the reason why I can sit here and write this post. Not just in a technical sense, but in a “why I get up in the morning” sense. It’s that stark.
When the broader industry seems hellbent on installing the Torment Nexus, these communities sit on the periphery, quietly designing, developing, documenting, maintaining, patching, porting, upgrading, testing, integrating, and discussing the tools that make my life better, and the tools I can recommend without question. Ditto for all the people who organise events, coordinate projects, manage sponsorships, mediate discussions, and engage with the public. This is all a tremendous amount of work which so often goes unacknowledged, let alone recognised.
I come home after a day of fighting with crappy systems that are ill conceived, badly designed, overpriced, inflexible, and full of hostile dark patterns that would shoot me in the back to make a line go up, and I get to use thoughtfully written, well implemented software that doesn’t chase the shiny. The fact that such small teams (relatively speaking) are responsible for software that is better, easier to use, and more open than what companies with orders of magnitude more resources produce is shocking to me.
I’ve only managed to meet a dozen or so of you at events or over the phone, but you know who you are. You should also know that you’ve had an outsized impact on my life. It’s cool when I read a man(1) page and your name is there, or when I log into a box and I see your handle at the end of a fortune(1).
I guess I wanted to say, in my usual roundabout way, thank you for all you do, and for entertaining my presence too. I’ll continue to work on ways I can contribute back.
By Ruben Schade in Sydney, 2025-10-30.
This report was written by Dennis Onyeka as part of Google Summer of Code 2025.
This is the 2nd blog post about his work. If you have missed the first blog post please read Google Summer of Code 2025 Reports: Enhancing Support for NAT64 Protocol Translation in NetBSD.
Typical rules look like:
map wm0 algo "nat64" 64:ff9b:2a4:: -> 192.0.2.33
map wm0 algo nat64 plen 96 64:ff9b::8c52:7903 <- 140.82.121.3
This tells NPF to translate outgoing IPv6 packets using the prefix
64:ff9b:2a4::/96, rewriting them to use the IPv4 address
192.0.2.33. When the packet returns and hits NPF, it
changes source from GitHub's IPv4 to GitHub's IPv6 address and then it
rewrites the header.
npf_nat64_rwrheader() rewrites the IPv6 header to an IPv4 header.
In summary:
- The IPv4 address of github.com is extracted from the IPv4-embedded IPv6 address defined in the rule configuration (e.g. 140.82.121.3 from 64:ff9b::8c52:7903).
- Checksums are recomputed with in4_cksum() or in6_cksum().
- The translation is performed by the npf_embed_ipv4() and npf_nat64_rwrheader() routines.
- The npf.conf(5) syntax and parser were extended to accept NAT64 configuration parameters.
- Testing was done with ping, curl and dig, observing packets using tcpdump and Wireshark.

This project successfully integrates NAT64, along with a separate DNS64 configuration, into NPF, enabling IPv6-only clients to reach IPv4-only servers through seamless translation, although further changes and implementation work are still needed.
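As an illustration of what the in4_cksum()/in6_cksum() routines compute after a header is rewritten, here is the standard Internet checksum (RFC 1071) sketched in Python. The header bytes below are a textbook IPv4 header example, not data from the project:

```python
def cksum(data: bytes) -> int:
    """One's-complement Internet checksum (RFC 1071), the algorithm
    recomputed by in4_cksum()/in6_cksum() after NAT64 rewrites a header."""
    if len(data) % 2:
        data += b"\x00"                      # pad to a 16-bit boundary
    total = sum(int.from_bytes(data[i:i + 2], "big")
                for i in range(0, len(data), 2))
    while total >> 16:                       # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Textbook IPv4 header whose checksum field is 0xb861; checksumming the
# whole header, checksum included, yields 0 when the checksum is valid.
hdr = bytes.fromhex("45000073000040004011b861c0a80001c0a800c7")
print(hex(cksum(hdr)))  # 0x0
```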
This is indeed the end of my GSoC program. It was an exciting experience working with systems developers, and I certainly intend to remain an active contributor to the NetBSD codebase.
Source code of the Google Summer of Code project can be found at the following branch.
I've just installed virt-manager with pkgin on NetBSD 9.2, because I want to run virtual machines with qemu + nvmm. The installation of virt-manager went OK, but when I ran it, an error came up:
netbsd-marietto# virt-manager
Traceback (most recent call last):
File "/usr/pkg/share/virt-manager/virt-manager.py", line 386, in <module>
main()
File "/usr/pkg/share/virt-manager/virt-manager.py", line 247, in main
from virtManager import cli
File "/usr/pkg/share/virt-manager/virtManager/cli.py", line 29, in <module>
import libvirt
ImportError: No module named libvirt
Googling a little bit, I may have found the solution here:
https://www.unitedbsd.com/d/285-linux-user-and-netbsd-enthusiast-hoping-to-migrate-some-day
where "kim" said:
Looking at pkgsrc/sysutils/libvirt/PLIST it doesn't look like the package provides any Python bindings -- which is what the "ImportError: No module named libvirt" error message is about. You could try py-libvirt from pkgsrc-wip and see how that works out.
I tried to start the compilation like this :
netbsd-marietto# cd /home/mario/Desktop/pkgsrc-wip/py-libvirt
netbsd-marietto# make
but I got this error:
make: "/home/mario/Desktop/pkgsrc-wip/py-libvirt/Makefile" line 15: Could not find ../../wip/libvirt/buildlink3.mk
make: "/home/mario/Desktop/pkgsrc-wip/py-libvirt/Makefile" line 16: Could not find ../../lang/python/distutils.mk
make: "/home/mario/Desktop/pkgsrc-wip/py-libvirt/Makefile" line 17: Could not find ../../mk/bsd.pkg.mk
make: Fatal errors encountered -- cannot continue
If you want to see the content of the Makefile, here it is:
gedit /home/mario/Desktop/pkgsrc-wip/py-libvirt/Makefile
#$NetBSD: Makefile,v 1.32 2018/11/30 09:59:40 adam Exp $
PKGNAME= ${PYPKGPREFIX}-${DISTNAME:S/-python//}
DISTNAME= libvirt-python-5.8.0
CATEGORIES= sysutils python
MASTER_SITES= https://libvirt.org/sources/python/
MAINTAINER= [email protected]
HOMEPAGE= https://libvirt.org/sources/python/
COMMENT= libvirt python library
LICENSE= gnu-lgpl-v2
USE_TOOLS+= pkg-config
.include "../../wip/libvirt/buildlink3.mk"
.include "../../lang/python/distutils.mk"
.include "../../mk/bsd.pkg.mk"
Can someone help me fix the error? Many thanks.
I have a good mix today.
Your unrelated music video of the week: Return of the Phantom by VOID. 2025 or 1985? Can’t easily tell.
This report was written by Dennis Onyeka as part of Google Summer of Code 2025.
The goal of the NAT64 project is to implement IPv6-to-IPv4 translation inside NPF (NetBSD Packet Filter). NAT64 enables IPv6-only clients to communicate with IPv4-only servers by embedding/extracting IPv4 addresses in IPv6 addresses as per RFC 6052 and RFC 6145. For now we are using a 1:1 mapping to implement NAT64 translation, whereby an IPv6 host uses its IPv4 address to communicate with an IPv4-only server. As an example we will use github.com (140.82.121.3), which supports only IPv4. In order to enable NAT64 in NPF we will have a rule like this:
map wm0 algo "nat64" 64:ff9b:2a4:: -> 192.0.2.33
This means we want to use the host IPv4 address associated with the wm0 interface (192.0.2.33) to access the public internet in order to communicate with GitHub's IPv4 server.
During this process the IPv6 header will be rewritten to IPv4. The IP structure requires source and destination addresses, so the new IPv4 source address will be the host's IPv4 address (which is likely to change during further improvements), and the IPv4 destination address will be taken from GitHub's IPv4-embedded IPv6 address, i.e. the IPv4 address embedded into the IPv6 address (64:ff9b::8c52:7903) obtained from the DNS resolver.
Note that NPF is our router, so we have to enable and configure a DNS caching resolver like unbound on the NetBSD machine. The job of DNS64 is to synthesize AAAA responses from IPv4-only A records using the well-known prefix (64:ff9b::/96), embedding GitHub's IPv4 address (140.82.121.3) into it, which gives 64:ff9b::8c52:7903; the trailing hexadecimal values represent GitHub's IPv4 address.
When the packet returns from GitHub, it arrives on the IPv6 interface, so the IPv4 address will be embedded back into its required position based on the prefix length passed by the user in the configuration, and the packet will then be delivered to the IPv6 interface of the host machine.
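The embedding and extraction described above (RFC 6052, /96 prefix case) can be sketched with Python's ipaddress module:

```python
import ipaddress

# Well-known NAT64/DNS64 prefix from RFC 6052.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")


def embed_ipv4(v4: str, prefix: ipaddress.IPv6Network = NAT64_PREFIX) -> str:
    """Synthesize the IPv6 address DNS64 would return for an IPv4-only host
    by ORing the IPv4 address into the low 32 bits of the prefix."""
    v6 = int(prefix.network_address) | int(ipaddress.IPv4Address(v4))
    return str(ipaddress.IPv6Address(v6))


def extract_ipv4(v6: str) -> str:
    """Recover the embedded IPv4 address on the return path (/96 case)."""
    return str(ipaddress.IPv4Address(int(ipaddress.IPv6Address(v6)) & 0xFFFFFFFF))


print(embed_ipv4("140.82.121.3"))        # 64:ff9b::8c52:7903
print(extract_ipv4("64:ff9b::8c52:7903"))  # 140.82.121.3
```

For prefix lengths other than /96 the IPv4 bits are placed at different offsets, which is why the rules above carry an explicit plen parameter.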
So far I’ve been focusing on the core translation path, making sure headers are rewritten correctly and transport checksums are updated. This also requires changes in both userland and the kernel.
Work so far:
- Extended npf.conf so users can specify the NAT64 prefix length.
- Tested with ping, nc (netcat), curl, dig.
- Used tcpdump/tshark/Wireshark on NetBSD to inspect packets before/after translation.
- Rebuilt with build.sh with kernel debug logs enabled.
- Traced with ktrace.

Experience: Working deep inside NetBSD’s kernel networking stack has been challenging but rewarding. It gave me hands-on experience with mbufs, packet parsing, and kernel-level checksums.
This year, 2025, the KDE Community held its yearly conference in Berlin, Germany. On the way I reinstalled FreeBSD on my Frame.work 13 laptop in another attempt to get KDE Plasma 6 Wayland working. Short story: yes, KDE Plasma 6 Wayland on FreeBSD works.
↫ Adriaan de Groot
Adriaan de Groot is a long-time KDE developer and FreeBSD package maintainer, and he’s published a short but detailed guide on setting up a KDE Plasma desktop on FreeBSD using Wayland instead of X11. With the Linux world slowly but finally leaving X11 behind, the BSD world really has little choice but to follow, especially if they want to continue offering the two major desktop environments. Most of KDE and GNOME are focused on Linux, and the BSDs have always kind of tagged along for the ride, and over the coming years that’s going to mean they’ll have to invest more in making Wayland run comfortably on BSD.
Of course, the other option would be the KDE and GNOME experience on the BSDs slowly degrading over time, but I think especially FreeBSD is keen to avoid that fate, while OpenBSD and NetBSD seem a bit more hands-off in the desktop space. FreeBSD is investing heavily in its usability as a desktop operating system, and that’s simply going to mean getting Wayland support up to snuff. Not only will KDE and GNOME slowly depend more and more on Wayland, Xorg itself will also become less maintained than it already is.
Sometimes, the current just takes you where it’s going.
I am a huge fan of my Rock 5 ITX+. It wraps an ATX power connector, a 4-pin Molex, PoE support, 32 GB of eMMC, front-panel USB 2.0, and two Gen 3×2 M.2 slots around a Rockchip 3588 SoC that can slot into any Mini-ITX case. Thing is, I never put it in a case because the microSD slot lives on the side of the board, and pulling the case out and removing the side panel to install a new OS got old with a quickness.
I originally wanted to rackmount the critter, but adding a deracking difficulty multiplier to the microSD slot minigame seemed a bit souls-like for my taste. So what am I going to do? Grab a microSD extender and hang that out the back? Nay! I’m going to neuralyze the SPI flash and install some Kelvin Timeline firmware that will allow me to boot and install generic ARM Linux images from USB.
↫ Interfacing Linux
Using EDK2 to add UEFI to an ARM board is awesome, as it solves some of the most annoying problems of these ARM boards: they require custom images specifically prepared for the board in question. After flashing EDK2 to this board, you can just boot any ARM Linux distribution – or Windows, NetBSD, and so on – from USB and install it from there. There’s still a ton of catches, but it’s a clear improvement.
The funniest detail for sure, at least for this very specific board, is that the SPI flash is exposed as a block device, so you can just use, say the GNOME Disk Utility to flash any new firmware into it. The board in question is a Radxa ROCK 5 ITX+, and they’re not all that expensive, so I’m kind of tempted here. I’m not entirely sure what I’d need yet another computer for, honestly, but it’s not like that’s ever stopped any of us before.
This report was written by Ethan Miller as part of Google Summer of Code 2025.
The goal is to improve the capabilities of asynchronous IO within NetBSD. Originally the project espoused a model that pinned a single worker thread to each process. That thread would iterate over pending jobs and complete blocking IO. From this, the logical next step was to support an arbitrary number of worker threads. Each process now has a pool of workers recycled from a freelist, and jobs are grouped per-file so that we do not thrash multiple threads on the same vnode which would inevitably lock. This grouping also opens the door for future optimisations in concurrency. The guiding principle is to keep submission cheap, coalesce work sensibly, and only spawn threads when the kernel would otherwise block.
We pin what is referred to as a service pool to each process, with each service pool capable of spawning and managing service threads. When a job is enqueued it is distributed to its respective service thread. For regular files we coalesce jobs that act on the same vnode into one thread. If we fall back to the synchronous IO path within the kernel it would lock anyway, but this approach is prudent because if more advanced concurrency optimisations such as VFS bypass are implemented later this is precisely the model that would be required. At present, since that solution is not yet in place, all IO falls back to the synchronous pipeline. Even so there are performance gains when working with different files, since synchronous IO can still run on separate vnodes at the same time.
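The service-pool model described above can be sketched as a userland analogy in Python (an illustration of the design only, not NetBSD kernel code): jobs on the same file are serialized on one service thread, while jobs on different files proceed concurrently.

```python
import queue
import threading


class ServicePool:
    """Userland analogy of a per-process service pool: one service thread
    per file, so same-file jobs never thrash multiple threads."""

    def __init__(self):
        self._workers = {}              # file id -> (job queue, thread)
        self._lock = threading.Lock()

    def _service(self, jobs):
        # Service thread: drain jobs for one file in submission order.
        while True:
            job = jobs.get()
            if job is None:             # shutdown sentinel
                return
            job()                       # blocking IO would happen here

    def enqueue(self, file_id, job):
        # Cheap submission: coalesce jobs on the same file onto one thread,
        # spawning a service thread only the first time the file is seen.
        with self._lock:
            if file_id not in self._workers:
                jobs = queue.Queue()
                t = threading.Thread(target=self._service, args=(jobs,),
                                     daemon=True)
                t.start()
                self._workers[file_id] = (jobs, t)
            self._workers[file_id][0].put(job)

    def drain(self):
        # Stop every service thread after its queue empties.
        for jobs, _ in self._workers.values():
            jobs.put(None)
        for _, t in self._workers.values():
            t.join()
```

Because each file's queue is drained by a single thread, same-file ordering is preserved for free, while IO on distinct files overlaps, which mirrors the performance behaviour described above.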
Through the traditional VFS read/write path, requests eventually reach
bread/bwrite and block upon a cache miss until completion. This kills
concurrency. I considered a solution that bypassed the normal vnode
read/write path by translating file offsets to device LBAs with VOP_BMAP,
constructing block IO at the buffer and device layer, submitting with
B_ASYNC, and deferring the wait to the AIO layer with biodone bookkeeping
instead of calling biowait at submission. This keeps submission short and
releases higher level locks before any device wait. The assumptions are
that filesystem metadata is frequently accessed therefore cached so
VOP_BMAP usually does not block, that block pointers for an inode mostly
remain stable for existing data, and that truncation does not rewrite past
data. For the average case this would provide concurrency on the same file.
In practice, however, it was exceptionally difficult to implement because
the block layer lacks the necessary abstractions.
This is, however, exactly the solution espoused by FreeBSD, and they make it work well because struct bio is an IO token independent of the page and buffer cache. GEOM can split or clone a bio, queue the pieces to devices, collect child completions, and run the parent callback. Record locks are treated as advisory, so once a bio is in flight the block layer completes it even if the advisory state changes. NetBSD has no equivalent token: struct buf is both a cache object and an IO token, tied to UBC and to drivers through biodone and biowait. For now the implementation of service pools and service threads lays the groundwork for asynchronous IO. Once the BIO layer reaches adequate maturity, integrating a bio-like abstraction will be straightforward and yield immediate improvements for concurrency on the same vnode. The logical next step is to design and port something comparable to FreeBSD's struct bio, which would map very cleanly onto the current POSIX AIO framework.
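The biodone/biowait bookkeeping discussed above can likewise be sketched in userland terms (again an analogy, not kernel code): submission returns immediately, each completed child IO invokes the biodone analogue, and the wait is deferred until the caller actually needs the result.

```python
import threading


class AioJob:
    """Deferred-completion token: submit returns immediately, a completion
    callback (the biodone analogue) accounts for each finished child IO,
    and wait() (the biowait analogue) is deferred out of the submission
    path until all children have completed."""

    def __init__(self, nchildren):
        self._pending = nchildren          # outstanding child IOs
        self._cv = threading.Condition()

    def biodone(self):
        # Called once per completed child IO; wakes waiters on the last one.
        with self._cv:
            self._pending -= 1
            if self._pending == 0:
                self._cv.notify_all()

    def wait(self):
        # Deferred wait: blocks only here, never at submission time.
        with self._cv:
            while self._pending:
                self._cv.wait()


job = AioJob(nchildren=3)
for _ in range(3):                          # simulate three child completions
    threading.Thread(target=job.biodone).start()
job.wait()                                  # returns once all children finish
```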
My development setup is optimised for building and testing quickly. I use
scripts to cross-build the kernel and boot it under QEMU with a small FFS
root. The kernel boots directly with the QEMU option -kernel without any
supporting bootloader. Early on I tested against a custom init dropped onto
an FFS image. Now I do the same except init simply launches a shell which
allows me to run ATF tests without a full distribution. This makes it
possible to compile a new kernel and run tests within seconds.
One lesson I have taken away is that progress never happens overnight. It takes enormous effort to get even a few thousand lines of highly multi-threaded race-prone code to behave consistently under all conditions. Precision in implementation is absolutely required. My impression of NetBSD is that it is a fascinating project with an abundance of seemingly low-hanging fruit. In reality none of it is truly low-hanging or simple, but compared to Linux there remains a great deal of work to be done. It is not easy work but the problems are visible and the path forward is clearer.
I also want to note that I intend to provide long-term support for this code in case any issues arise.
The code written as part of this project can be found here.
I have a Raspberry Pi 3 with NetBSD 10, running CI jobs. Because SD cards are notoriously unreliable, I attached a USB hard drive to it. The HDD has a swap partition and scratch space for the builds, while root is on the SD. Unfortunately, some writes end up going to the root file system after all, which meant that the SD card was destroyed after only about a year!
Earlier this year, I was trying to get actual daily work done on HP-UX 11.11 (11i v1) running on HP’s last and greatest PA-RISC workstation, the HP c8000. After weeks of frustration caused first by outdated software no longer working properly with the modern web, and then by modern software no longer compiling on HP-UX 11.11, I decided to play the ace up my sleeve: NetBSD’s pkgsrc has support for HP-UX. Sadly, HP-UX is obviously not a main platform or even a point of interest for pkgsrc developers – as it should be, nobody uses this combination – so various incompatibilities and more modern requirements had snuck into pkgsrc, and I couldn’t get it to bootstrap. I made some minor progress here and there with the help of people far smarter than I, but in the end I just lacked the skills to progress any further.
This story will make it to OSNews in a more complete form, I promise.
Anyway, in May of this year, it seems Brian Robert Callahan was working on a very similar problem: getting pkgsrc to work properly on IBM’s AIX.
The state of packages on AIX genuinely surprises me. IBM hosts a repository of open source software for AIX. But it seems pretty sparse compared to what you could get with pkgsrc. Another website offering AIX packages seems quite old. I think pkgsrc would be a great way to bring modern packages to AIX.
I am not the first to think this. There are AIX 7.2 pkgsrc packages available at this repository, however all the packages are compiled as 32-bit RISC System/6000 objects. I would greatly prefer to have everything be 64-bit XCOFF objects, as we could do more with 64-bit programs. There also aren’t too many packages in that repository, so I think starting fresh is in our best interest.
As we shall see, this was not as straightforward as I would have hoped.
↫ Brian Robert Callahan
Reading through his journey getting pkgsrc to work properly on AIX, I can’t help but feel a bit better about myself not being able to get it to work on HP-UX 11.11. Callahan was working with AIX 7.2 TL4, which was released in November 2019 and is still actively supported by IBM on a maintained architecture, while I was working with HP-UX 11.11 (or 11i v1), which last got some updates in and around 2005, running on an architecture that’s well dead and buried. Looking at what Callahan still had to figure out and do, it’s not surprising someone with my lack of skill in this area couldn’t get it working.
I’m still hoping someone far smarter than I stumbles upon a HP c8000 and dives into getting pkgsrc to work on HP-UX, because I feel pkgsrc could turn an otherwise incredibly powerful HP c8000 from a strictly retro machine into something borderline usable in the modern world. HP-UX is much harder to virtualise – if it’s even possible at all – so real hardware is probably going to be required. The NetBSD people on Mastodon suggested I could possibly give remote access to my machine so someone could dive into this, which is something I’ll keep under consideration.
I have this code:
NSData* data = [[pipe fileHandleForReading] readDataToEndOfFile];
NSString* s = [[NSString alloc] initWithData: data encoding: NSUTF8StringEncoding];
(For context, this is part of a function that executes system commands.)
And this last line produces a warning:
Calling [GSPlaceholderString -initWithData:encoding:] with incorrect signature. Method has @28@0:8@16i24 (@28@0:8@16i24), selector has @28@0:8@16I24
The command works fine and gives me the output I want, but I don't want to see this warning, because I am working on a console application and these warnings severely disrupt the experience.
Is there any way to disable those warning messages? I tried to "#define NSLog(...)", but it didn't work.
I am using the GCC compiler on NetBSD 10.0.
I'm trying to set up a VM running NetBSD 6.0 (very unfortunately, I have to use this specific version). While I've managed to find the installer ISO very easily, I can't find a single working pkgin / pkgsrc mirror; it seems they've all been removed.
Is anyone aware of any remaining mirrors of that ancient version, or of some other method of installing software packages (mind that they need to be the same versions that shipped with NetBSD 6.0)?
The Hurd, the collection of services that run atop the GNU Mach microkernel, has been in development for a very, very long time. The Hurd is intended to serve as the kernel for the GNU Project, but with the advent of Linux and its rapid rise in popularity, it’s the Linux kernel that became the defacto kernel for the GNU Project, a combination we’re supposed to refer to as GNU/Linux. Unless you run Alpine, of course. Or you run any other modern Linux distribution, which probably contains more non-GNU code than it does GNU code, but I digress.
The Hurd is still in development, however, and one of the more common ways to use The Hurd is by installing Debian GNU/Hurd, which combines the Debian package repositories with The Hurd. Debian GNU/Hurd 2025 was released yesterday, and brings quite a few large improvements and additions.
This is a snapshot of Debian “sid” at the time of the stable Debian “Trixie” release (August 2025), so it is mostly based on the same sources. It is not an official Debian release, but it is an official Debian GNU/Hurd port release.
↫ Samuel Thibault
About 72% of the Debian archive is available for Debian GNU/Hurd, for both i386 and amd64. This indeed means 64bit support is now available, which makes use of the userland disk drivers from NetBSD. Support for USB disks and CD-ROM was added, and the console now uses xkb for keyboard layout support. Bigger-ticket items are working SMP support and a port of the Rust programming language. Of course, there’s a ton more changes, fixes, improvements, and additions as well.
You can either install Debian GNU/Hurd using the Debian installer, or download a pre-installed disk image for use with, say, qemu.
The long-planned next meeting of NYCBUG is tomorrow. If you are going and have a Framework laptop, please bring it for testing HDMI. I assume it’s related to ongoing support work.
The meeting is canceled because no presenter was available.
pkg_admin audit said:
Package sqlite3-3.49.2 has a memory-corruption vulnerability, see https://nvd.nist.gov/vuln/detail/CVE-2025-6965
Package libxml2-2.14.4 has a use-after-free vulnerability, see https://nvd.nist.gov/vuln/detail/CVE-2025-49794
Package libxml2-2.14.4 has a denial-of-service vulnerability, see https://nvd.nist.gov/vuln/detail/CVE-2025-49795
Package libxml2-2.14.4 has a denial-of-service vulnerability, see https://nvd.nist.gov/vuln/detail/CVE-2025-49796
Package libxml2-2.14.4 has a integer-overflow vulnerability, see https://nvd.nist.gov/vuln/detail/CVE-2025-6021
Package libxml2-2.14.4 has a buffer-overflow vulnerability, see https://nvd.nist.gov/vuln/detail/CVE-2025-6170
I usually go to the NetBSD site, check the errata, and follow the instructions, but:
https://www.netbsd.org/support/security/patches-10.1.html
The page comes up empty. How do I fix/patch those vulnerabilities?
I have also run
sysupgrade auto
and rebooted, but the messages still appear.
I preassembled this list of links over time, so some of them have probably changed. For the “I’m sorry…” link, that just means more material.
RPG mini-theme this week.
Your unrelated music link of the week: Beanbag metal.
So to set the stage (and it's a strange stage...): this is on NetBSD, using pkgsrc, on the hppa architecture. I build this the same way on sparc without issue.
On hppa, I get:
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev.cst4' of cmBinUtilsMacOSMachOOToolGetRuntimeDependenciesTool.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev]' of cmBinUtilsMacOSMachOOToolGetRuntimeDependenciesTool.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev.cst4' of cmBinUtilsWindowsPEDumpbinGetRuntimeDependenciesTool.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev]' of cmBinUtilsWindowsPEDumpbinGetRuntimeDependenciesTool.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev.cst4' of cmBinUtilsWindowsPEObjdumpGetRuntimeDependenciesTool.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev]' of cmBinUtilsWindowsPEObjdumpGetRuntimeDependenciesTool.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZNSt16_Sp_counted_baseILN9__gnu_cxx12_Lock_policyE1EE10_M_releaseEv.cst4' of cmConditionEvaluator.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZNSt16_Sp_counted_baseILN9__gnu_cxx12_Lock_policyE1EE10_M_releaseEv]' of cmConditionEvaluator.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZNSt16_Sp_counted_baseILN9__gnu_cxx12_Lock_policyE1EE10_M_releaseEv.cst4' of cmExecuteProcessCommand.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZNSt16_Sp_counted_baseILN9__gnu_cxx12_Lock_policyE1EE10_M_releaseEv]' of cmExecuteProcessCommand.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZNSt16_Sp_counted_baseILN9__gnu_cxx12_Lock_policyE1EE10_M_releaseEv.cst4' of cmFindPackageCommand.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZNSt16_Sp_counted_baseILN9__gnu_cxx12_Lock_policyE1EE10_M_releaseEv]' of cmFindPackageCommand.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZN17cmFunctionBlockerD2Ev.cst4' of cmForEachCommand.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZN17cmFunctionBlockerD5Ev]' of cmForEachCommand.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZNSt16_Sp_counted_baseILN9__gnu_cxx12_Lock_policyE1EE10_M_releaseEv.cst4' of cmGeneratorExpressionDAGChecker.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZNSt16_Sp_counted_baseILN9__gnu_cxx12_Lock_policyE1EE10_M_releaseEv]' of cmGeneratorExpressionDAGChecker.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev.cst4' of cmLDConfigLDConfigTool.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev]' of cmLDConfigLDConfigTool.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev.cst4' of cmPlistParser.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev]' of cmPlistParser.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZNSt16_Sp_counted_baseILN9__gnu_cxx12_Lock_policyE1EE10_M_releaseEv.cst4' of cmake.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZNSt16_Sp_counted_baseILN9__gnu_cxx12_Lock_policyE1EE10_M_releaseEv]' of cmake.o
`_ZL17__gthread_triggerv' referenced in section `.rodata._ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev.cst4' of cmcmd.o: defined in discarded section `.text._ZL17__gthread_triggerv[_ZN20cmBasicUVPipeIStreamIcSt11char_traitsIcEED1Ev]' of cmcmd.o
make: *** [Makefile:2: cmake] Error 1
It actually compiles everything just fine; it seems that right when it gets to linking the "cmake" binary itself, it fails like this.
I see some other posts on here talking about changing the order in the linker section ?
I've posted to the NetBSD mailing lists as well as the CMake Discourse forum, with no replies yet.
Any ideas would be greatly appreciated.