Why CentOS Stream and the end of CentOS Linux don’t really matter

There were clones of — or more accurately downstream projects based on — Red Hat Enterprise Linux before Red Hat bought CentOS in 2014.

CentOS started in 2004, and there have been other distributions based on RHEL. (My favorite is still Nux’s Stella from the CentOS 6 days.)

There are many (many!) other Linux distributions and BSD projects that can supply a server or desktop operating system. Most of them are not owned by corporations that can limit their distribution on a whim.

CentOS was a big deal before Red Hat acquired the project in 2014. New projects will now take its place.

Red Hat is right about many things. But the one thing they are very right about is the importance of contribution and community to the health of an open-source software project.

Stream will add those two things — contribution and community — to RHEL. It should just be called RHEL Stream. They could call it Chocolate Chip Cookie Stream for all I care.

But enterprises, high-performance computing, academia, small businesses, startups and individual users have many alternatives. For many uses, CentOS Stream might even be better than downstream-from-RHEL CentOS. But it will be worse for many other things.

Still, the software is open-source. Though Red Hat is a big contributor to the Linux kernel and many other upstream projects, that kernel and most of the rest of the bits in RHEL are not Red Hat’s to withhold. Red Hat knows that. They put it all together, and they do that well. But in the large, it’s a collaborative effort.

The reason CentOS and other clones were able to thrive in the first place was because Red Hat made the source of RHEL both available and usable. They go above and beyond. That will probably not change. I say “probably” because releasing a product, say CentOS 8, with a promise — implied or actual — of a certain level of support and then abruptly going back on that promise is a breach of trust.

Red Hat has a lot of community reconciliation to do — for its own benefit, not just that of its downstream communities.

Meanwhile, Debian, Ubuntu, openSUSE, FreeBSD and many others stand ready to run your things. In case you’re keeping score, two of these are run and owned by corporations, and two are not.

And by the middle of next year, expect more than a couple new projects looking to be downstream RHEL clones. We know about Rocky Linux and Project Lenix. None of them will be owned by Red Hat. Maybe one or two of them will be able to deliver what users want.

I’m fairly certain Red Hat didn’t think ending CentOS as we know it meant the end of downstream RHEL clones.

But they will cost Red Hat nothing. Except lost business and reputation. They could always gain customers from this move, but I really don’t see it.

And when those downstream projects appear and stick around, angry users around the world will be happier than they are right now. Take that one to the bank.

Discuss this post on Reddit and at Hacker News.

CentOS Stream and the end of the CentOS clone: perils, pitfalls, risks and opportunities for Red Hat

Red Hat unleashed the kraken with its recent announcement that its CentOS 8 clone of Red Hat Enterprise Linux would be shut down in 2021 instead of 2029, to be replaced by the newish CentOS Stream 8.

What is CentOS Stream? It is a reimagining of CentOS as a continuously delivered yet version-constrained development distribution that tracks ahead of Red Hat Enterprise Linux 8 while staying within the RHEL 8 world from an ABI[1]-compatibility standpoint.

So instead of RHEL leading to an eventual CentOS build, CentOS Stream leads to an eventual RHEL build.

Stream is also an upstream.

Plus, as many Red Hatters have said, Stream welcomes contributors large (like Facebook) and small (like you and me) in a way that “old” CentOS and RHEL never could.

In the rhetoric around the ending of traditional CentOS, Red Hat employees have been saying the existing CentOS community is made up almost entirely of passive users, with essentially no contributors — mostly because it’s impossible to really contribute to CentOS or RHEL outside of filing bug reports. The only avenue, until now, for meaningful contribution to RHEL has been the very upstream and very welcoming Fedora.

Here’s how the “flow” of code used to go:

Fedora >>> RHEL >>> CentOS

and here’s how it will go:

Fedora >>> CentOS Stream >>> RHEL

For those unfamiliar with what the Fedora distribution is all about, it’s free to all users and encourages contributions and involvement. It is where the community — including employees of Red Hat — tests and develops new technologies that may eventually be included in RHEL.

Fedora is friendly and welcoming in a way few free software projects are. It’s a great community and a surprisingly great distro that gives users a years-ahead look at what mainstream Linux systems will eventually be doing.

Fedora releases every six months, and every release receives updates for the duration of the next release plus about a month. That generally works out to 13 months, give or take, but it’s possible to upgrade every six months. And the distro doesn’t stand still. It gets new kernels all the time, and many other components are also updated on a continuous basis.
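For what it’s worth, the six-month hop is just a few dnf commands these days; a quick sketch, with the release number (33 here) standing in for whatever the next version happens to be:

```shell
# Install the system-upgrade plugin if it isn't already present
sudo dnf install dnf-plugin-system-upgrade

# Download all packages for the next release (33 used as an example)
sudo dnf system-upgrade download --releasever=33

# Reboot into the offline upgrade step
sudo dnf system-upgrade reboot
```

The download step runs while the system is up and usable; the actual upgrade happens offline after the reboot.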

A new version of RHEL is “cut” from a Fedora release every three years. It promises a certain level of stability and compatibility, has full support for five years, maintenance support for another five years and “extended life cycle support” for another two years after that. Everybody talks about the 10 years — and they do so as if the 10 years were equivalent to the first five years, which is something to think about.

One of the biggest differences between CentOS Stream and RHEL/old CentOS is the support period. RHEL is one of the very few distros that patch for 10 years. By extension, so do CentOS and any other clone that can keep it going.

But CentOS Stream comes with a promise of “only” 5 years of support. That puts it in line with Ubuntu and maybe Debian. It’s not a positive for many. It “takes away” one of the features that made CentOS stand out. Unless things change, after 5 years, you’ll have to upgrade from Stream 8 to Stream 9. I can think of worse things.

I can’t see running any system for more than 5 years, but many can and want to do just that.

Red Hatters have been saying the reason the company created Stream is to encourage users large and small to participate in the development of RHEL by submitting bug fixes — and ostensibly also actual code.

CentOS Stream sounds good to me. For desktop use anyway. There’s no reason a desktop needs to track months behind RHEL. There is little downside — and potentially some upside — to being slightly ahead of it.

For server use, it may be a different story. If you read the thousands of comments out there, the end of CentOS as we know it — with only Stream surviving — is open-source armageddon, a line in the sand, a crossed Rubicon, a breach of trust. You name the cliche, and I promise it has been invoked.

Red Hat is saying that it either can’t or doesn’t want to split its effort between traditional CentOS and CentOS Stream. And as long as old-school CentOS remains on the table, nobody will want to switch to Stream in the first place.

Forcing people to do something they don’t want to do? Sometimes it works out, other times it just makes everyone angry.

The move to kill CentOS as it currently exists has been very unpopular. Very. Unpopular. Users are piling into threads to complain about how Red Hat has used its own Microsoftian form of “embrace, extend, extinguish” to first purchase the formerly independent CentOS Project and hire all its devs, wait while an “official” RHEL clone squeezed out competition like Scientific Linux, and then kill CentOS while keeping the name for what is now CentOS Stream.

Is there any reason why they didn’t call it RHEL Stream? That would be a more appropriate name.

Red Hat’s decision to purchase the clone project CentOS was a curious one. Why make a “free” version of your own product when you could just make that product free in the first place?

We all say that free RHEL clones like CentOS and Scientific Linux help promote the RHEL ecosystem and funnel business to Red Hat. Maybe that’s true. But maybe it isn’t. If anybody really knows, it would be Red Hat.

Why the uproar when there are Debian, SUSE and Ubuntu?

There is something about the technical excellence in RHEL that makes it the overwhelming choice in the enterprise market. Plus everyone uses CentOS. Because it’s just like RHEL. That’s it in a nutshell. People want to use it because everybody else is already using it.

I followed the CentOS project before it was acquired by Red Hat. Things could be dicey. Releases were generally VERY far behind RHEL. So were updates. But the ability to “play” with something just like RHEL is very powerful. It’s not just the price of $0. Being able to use a distro for free and without “registering” or otherwise being marketed to is table stakes in the Linux distribution world.

Even Microsoft — which is very much not a Linux company — recognizes the value of “free” in particular situations. If users had to fork out even $10 for Windows 10, and the “price” wasn’t bundled into their hardware purchase, the system’s market share among home, business and enterprise users would plummet. Those who really, really needed Windows would pay, but the system — and Microsoft’s desktop OS business — would bleed users, mindshare and relevance.

(While I’m on this Microsoft tangent, I will point out that the company does offer a Windows Server product and doesn’t appear to give it away for free. Maybe Red Hat is taking notes.)

When Red Hat made its “ending CentOS as we know it” announcement, it said there would be new opportunities to run RHEL for low and no cost. But those plans would be revealed later. It’s not a good idea to pull the rug out from under users — even if they’re paying nothing — and promise them something in the indeterminate future. It doesn’t add to the trust.

Here’s the deal. As I say above, for desktop users, CentOS Stream won’t just be fine, it’ll be better than the “old” CentOS. Stream will unfold comfortably within the confines of a RHEL/CentOS release. It won’t be “slow” Fedora. It will be much slower.

We don’t know how Stream will work on servers. There are so many different server workloads and use cases. For some of them, Stream will be great; for others, not so great.

What Red Hat isn’t talking about, but what’s happening as we speak, is the formation of new projects that aim to provide the “old” CentOS/Scientific Linux experience, taking the RHEL source and producing a downstream “clone” of the distribution.

Unless Red Hat goes to extremes to prevent it — and there are no indications that it will — the source code for RHEL will be in good enough shape for the clone projects to do their work.

And CentOS Stream will have competitors in the form of new RHEL clones.

Red Hat won’t control those clones. It won’t have to pay for them, either. We all have to assume that Red Hat is OK with that. I’m 100% sure they didn’t think ending CentOS meant an end to all RHEL downstream clones.

Even if they’re not based on RHEL, there are other Linux distributions capable of securely handling whatever server workload you throw at them. The scientific community, the high performance computing community, academic departments, corporations and companies — everybody could switch to Debian or Ubuntu. Life and computing will go on.

At this point, Red Hat’s move carries a lot of risk with an uncertain reward. Making people unhappy almost never helps your cause.

Linux users, academics and enterprises will survive. They’ll figure this out.

For Red Hat, it’s a gutsy move, and I have no idea how it’s going to go.

[1] ABI stands for Application Binary Interface. I had to look that one up.

Comment on this post on Reddit and Hacker News.

How this Debian Stable user ended up with Google Chrome from Google’s repository

It’s been a long time since I ran the “real” Google Chrome in Linux.

(Update: A new Chromium browser moved into Debian Stable on Jan. 1, 2021. The original post follows.)

I guess I knew that the Chromium web browser — the code from the open-source project that is still coded by Google people but isn’t fully Googled — was very out of date in the Debian Stable repository.

But I didn’t know how bad it was until I started digging. Digging is easy in Debian.
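A couple of stock apt commands do most of the digging; a quick sketch (the Debian package is named chromium):

```shell
# Compare the installed Chromium with what the repos currently offer
apt-cache policy chromium

# Read the top of the package changelog to see when it was last touched
apt changelog chromium | head -n 20
```

The bug reports themselves live on bugs.debian.org under the chromium package.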

While the rest of the web-browsing world is on version 87, Debian Stable users are stuck with version 83. And that version 83 hasn’t been patched or otherwise updated since July 12 — more than five months at this point.

Whenever something’s not right with a Debian package, the place to turn is the bug reports.

There have been problems. Chromium is notoriously hard to package. There are unresolved issues with dependencies. Plus packaging of Chromium for Debian appears to be the responsibility of a one-man army. I’m not even sure if this guy Michael is the maintainer of the package. It doesn’t look like he is. Others are trying to help, but Debian Stable is old, and Chromium needs some other, newer packages in order to run.

It has gotten so bad that Chromium was removed from the Testing repository that is slated to become the next Debian release sometime next year. That could be bad news if the situation isn’t resolved before the “freeze” of Testing in anticipation of Debian 11/Bullseye.

Debian has always done a great job with Firefox. It ships with Debian desktop ISOs, and it is a priority for Debian Developers. Chromium, with its Googly origins, is not.

But many of us need Chromium/Chrome. It’s the IE of the 2020s, and sometimes Firefox doesn’t quite cut it. Like it or not, Google services run better on Chrome. And I use a lot of Google services in my day job. For my non-work browsing, I use Firefox, but when I’m snatching chocolates off the belt and putting them in boxes, it’s all Chromium.

I have Flatpak enabled on my Debian Stable/Buster system, and I thought that might solve my problem. There is a Chromium Flatpak. There is also Ungoogled Chromium, which I just learned exists.

Neither of these Flatpaks would install. The error message said my version of the flatpak package is too old. Debian’s Buster Backports repository has a newer version of flatpak, and I’m already set up for Backports, so I got the new flatpak package.
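For anyone not already set up, the Backports dance looks roughly like this (the repo line is the standard one for Buster; adjust the mirror to taste):

```shell
# Add the Buster Backports repository (skip if it's already configured)
echo "deb http://deb.debian.org/debian buster-backports main" | \
  sudo tee /etc/apt/sources.list.d/buster-backports.list

sudo apt update

# Ask apt for the Backports build of flatpak instead of the Stable one
sudo apt install -t buster-backports flatpak
```

The `-t buster-backports` part matters: Backports packages are never installed by default, even with the repo enabled.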

Chromium and Ungoogled Chromium still wouldn’t install from Flathub.

There were error messages, but I really don’t care about them, so I didn’t write down what they said.

I just need to get this system working with a Chromium web browser that HAS ACTUALLY RECEIVED A SECURITY PATCH IN THE PAST SIX MONTHS.

This is a huge problem.

In many ways, the idea of a never-changing Stable operating system and a constantly changing web browser doesn’t make sense. That’s why Debian ships Firefox ESR (Extended Support Release) instead of “regular” Firefox.

So what did I do about Chromium?

I used the nuclear option. Or in this case the tactical nuclear option. The nuclear option is wiping Debian Stable and installing a Linux distribution that includes an up-to-date Chromium and is committed to keeping it that way.

The tactical nuclear option — for me anyway — is downloading the .deb of Chrome from Google and using dpkg to install it, knowing that the package adds the Google Chrome repository to my sources.list, and now Google is in charge of updating the browser for me.
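In command form, the tactical nuclear option is roughly this (the URL is Google’s standard direct link for the stable .deb):

```shell
# Download the current stable Chrome package from Google
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb

# Install it; dpkg will flag any missing dependencies
sudo dpkg -i google-chrome-stable_current_amd64.deb

# Let apt pull in anything dpkg couldn't resolve
sudo apt-get install -f

# The package drops Google's repo definition here, so from now on the
# browser updates through normal apt upgrades
cat /etc/apt/sources.list.d/google-chrome.list
```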

I’m already logging into Google to sync my bookmarks and other settings, in addition to being logged into Google Apps, so insisting I have Chromium instead of Chrome is nowhere near a hill I need to die on.

I just need a working, secure Chrome/Chromium browser. And now I have one.

I considered moving to Fedora — or even the dreaded CentOS Stream (another topic for another day) — so I could get the up-to-date Chromium that the Fedora Project packages for its own distro as well as the RHEL/CentOS EPEL repo.

(There was a time in the mid-level distant past when Chromium was a problem for Fedora as well. But the distro seems to have it under control now.)

I could also move to Ubuntu, where that distro’s decision to only offer Chromium as a Snap package has been controversial. The decision on Ubuntu’s part was a labor issue. It’s so hard to package Chromium that they can do it once for the Snap and use that on all their Snap-enabled releases.

I could even try Snap in Debian Stable, but how much do I have to do to get a secure version of what for many people is their most important, most used app?

Sometimes things are broken in Debian. Sometimes it’s literal: a package that just doesn’t work. Sometimes it’s philosophical: Is it The Debian Way? That often combines with developer attention or interest — or the lack thereof — to leave a gaping hole in the distro.

For instance, it’s hard to get a working IDE in Debian. That’s why I turned to Flatpak in the first place. It’s the easiest (to me) way to get NetBeans, IntelliJ and Eclipse. Yep, even Eclipse fell out of the Debian repo.

Another thing I’ve learned over the past year or so is that while alternative packaging systems like Flatpak, Snap and AppImage have their place in the Linux ecosystem, it’s hard to beat traditional Linux packaging (.deb and .rpm).

I have spent a few long periods of time running Debian Stable, just as I have Fedora. Those are the two distros I’ve run the most, and they couldn’t sit at more opposite ends of the spectrum. Debian stays the same throughout the release, and Fedora changes all the time. You run into problems in both cases — not too many. I always say there’s about one mid-level problem per year in both distros.

For my time running Debian 10, Chromium is my problem. At least I’ve been able to solve it. You may not agree with using Google’s Chrome repo, but it gets the job done. If I didn’t use any Google services, that would be one thing, but since I use many, this makes the most sense for me right now.

When I finally get around to buying a bigger NVMe SSD and reinstalling my operating system, this situation will factor into my choice of Linux distribution. No question about that.

Update: As of Sunday, Dec. 20, 2020, a new Chromium (version 87) is now in Debian Sid/Unstable.

Update: That new Chromium version moved into Stable on Jan. 1, 2021.

Comment on this post at Hacker News or Reddit.

WordPress import is powerful, mysterious — what it says (and what I’m saying) about the past, present and future of online expression

I recognize that the title of this post is absurd. I intended to write about WordPress Import and how it makes it so easy to move thousands of posts and images from one WP site to another. But the post took a turn into what we should do about everything we write online.

I should probably split this into two posts. Instead I’ll just ask you to go along for the ride. I’m adding this foreword on my phone with the WordPress mobile app, and that’s another piece of the WP ecosystem that benefits both .com (Automattic’s WordPress hosting service) and .org (self-hosted) users alike.

I’ve always been skeptical of how you move content from one WordPress site to another. I’m about to move an entire installation, and I think there should be a lot less mystery and a lot more “here’s how you do it.” Maybe I’m missing something.

I always worry: What if my entries come over to the new site but my images are all on the old one, and I’m left with a thousand posts linking to a site that’s going to go away?

So I did a test. I took the entries from two self-hosted WordPress.org blogs and exported them in the usual WP XML format. I uploaded those two XMLs to this WordPress.com blog, and all the entries, images and comments came along with it. Somehow the Import function was able to grab all those JPGs, stash them in the new system and rewrite the links in the posts. That’s the holy grail of blog migration.

The WordPress documentation should really be shouting from the rooftops about how well this works.

It works so well when moving from a WordPress.org site to WordPress.com. Does it work as well going from a WordPress.org to another WordPress.org, or from a .com to a .org?
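For .org-to-.org moves, the same WXR export/import can be scripted with WP-CLI; a sketch, assuming WP-CLI is installed on both servers:

```shell
# On the old site: dump all content to a WXR (XML) file
wp export --dir=/tmp --filename_format='old-site.xml'

# Copy old-site.xml to the new server, then on the new site:
wp plugin install wordpress-importer --activate
wp import /tmp/old-site.xml --authors=create
```

The `--authors=create` flag creates any missing author accounts on the destination instead of failing the import.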

Why did I move about 2,000 posts from a couple of .orgs to this .com? For one thing, I can’t believe that those two particular .org blogs (over which I don’t have control) are still live. I had to get the posts — and the images and comments — while I still could. Now I have them on this .com, and I trust WordPress to pretty much keep this site going forever.

My plan was to somehow convert these posts into a format that could be used in a Hugo static-site blog. I could still do that, but then I’d be on the hook for hosting them forever. I can probably do that as long as I’m alive. Who knows what happens after that?

Don’t get me wrong. This isn’t Shakespeare or Plato or anything like that. It’s just my once-daily musings. It’s probably too tech-heavy, and it definitely says something about what’s wrong with me as a person that I focused (and let’s be real, still focus) on tech and not other parts of life. If I were to analyze it, I’d say that tech is a “safe space” for me and my mind, and that’s why I do it and write about it.

That was a little heavy. I’d like to write more like that last paragraph and less like the 2,000 entries surrounding it. Be that as it may. I think it’s important not just for me but for all of us to write and have that writing survive.

The Internet Archive notwithstanding, there’s a huge “here today, gone tomorrow” theme in web-based projects and technologies, and most of us have written (blogged, tweeted, Facebooked, MySpaced, Google Plussed) in so many different places, at the behest of companies large and small. So many things go away. WordPress — in its “dot-com” form at http://wordpress.com — is one of the more consistent players out there. I wish hosting your own (which I have done and will continue to do) were as reliable. The slings and arrows of domain hosts (I’m fighting this battle right now), shared hosting, cloud computing and startup birth, acquisition and death — and our own changing obsessions and attentions — make for a complicated road if keeping our content visible and accessible is the goal.

Many of us blog/post/comment in dozens of places. Over the years, so much goes away. My blogPoster project is an attempt to address this. Just the act of mirroring a Twitter (and now Mastodon) post on a self-hosted site is me saying, “I give you this content, huge web service, but I also keep a copy for myself.” We should all be keeping these copies, and it shouldn’t be so hard to do it. Thanks go to WordPress.com and Automattic for being one company that makes it easy.

Reinforcing an Ikea Billy bookcase

After many months in the shed, I pulled out our daughter’s half-height, falling-apart Billy bookcase and did a reinforcement job.

By reinforcement, I mean using a drill and wood screws to secure the back of the bookcase to the shelves, and the sides of the shelves to the frame. Now it is crazy solid and living a new life as a utility shelf in that same shed.

Reassessing WordPress.com

WordPress has been around a long, long time, and the company has maintained its commitment to keeping blogs free and available. Of course they would like it if you paid them for services like a custom domain or more storage, but you don’t have to.

Most geeky types, myself included, are deep into static site generators like Hugo, Gatsby and Eleventy. It all began with Jekyll and Octopress, but those have fallen out of favor. There’s a big newness-hotness component to it. Hugo is already old, and Gatsby and Eleventy are hot at the moment.

But if you want to put out a blog with a nice theme, full-featured CMS and user comments managed through the same interface, WordPress is there for you. Even though I help manage comments for more than a dozen sites via Disqus, I prefer to use native WordPress comments when it’s manageable.

I had to switch from Firefox to Chrome in Windows 10, and I’m not happy about it. Conexant’s horrible audio software and its CPU-grabbing Flow.exe is the problem.

I made the switch from Google Chrome to Firefox at least a year ago, and I thought the Mozilla-coded browser’s performance improvements were enough to allow me to eliminate one spy-ish element from my computing life.

Never mind that I use a lot of Google services, especially in my work life. That’s another issue for another day. I just wanted a privacy-focused browser that doesn’t necessarily phone home everything I do to Mountain View.

But on my HP Envy laptop running Windows 10, I’ve noticed the fan running a lot more, and a look at the Task Manager shows Firefox grabbing a lot of CPU, as does a process that only seems to run (and hog 20 percent of CPU) along with it: flow.exe.

I’ve since learned that flow.exe has something to do with the Conexant sound card in my laptop, and it is running all the time when Firefox is running, supposedly because the Conexant software can’t figure out how to configure itself when presented with whatever it is that Firefox is doing. There was a recent Conexant driver update for this laptop, and I think that’s when the problem began.

I could try to figure out how to revert the Conexant driver, or something like that, or I could just run Chrome, which sips CPU by comparison, and wait for the situation to somehow resolve itself with fixes from Conexant, HP, Microsoft or Mozilla.

In terms of a fix, I won’t hold my breath. I will keep checking Firefox, which I’m still using on my Fedora laptop. That older HP computer doesn’t have Chrome at all.

That said, here is the best post I’ve seen about how to deal with flow.exe.

My Windows 7 PC at the office doesn’t seem to be affected by this Firefox issue at all, but it’s things like this on the Windows laptop that send me running to Chrome.

Update: I’m going to try this solution.

What I did was go to the Services control panel (open Control Panel, then search for Services) and change the startup type of CxMonSvc and CxUtilSvc from automatic to manual. Then I rebooted.

What happened: This worked. Sound works fine, but there’s no flow.exe, and CPU usage is normal.

Update on April 14, 2019: The problem is now worse than ever. Nothing seems to stop Flow.exe when Firefox is running. I’m considering removing the Conexant “smart” audio driver altogether because it’s not smart enough not to bring the laptop to its knees even when I’m not playing any sound whatsoever.

Update later on April 14, 2019: I downloaded the latest Conexant High Definition Audio Driver, Version Rev.D, released by HP on March 22, 2019. It is a 220 MB file. That is very, very large for an audio driver. Most FULL Linux distributions with a full complement of software are under 1 GB. And they include the audio driver.

This means I am NOW using Windows’ Add-Remove Programs to remove the Conexant ISST audio driver. We’ll see what happens.

Uninstalling the driver: When I uninstalled the driver, I rebooted and had no sound.

Installing an older version of the Conexant audio driver: I’m trying an older version of the driver offered by HP: Conexant High-Definition (HD) Audio Driver Rev.C.

Did that work? I am not working this week, so I am not torture-testing the web browser. But in limited use, so far I’m not detecting Flow.exe issues with the “old” Conexant driver while running Firefox on Windows 10.

Update on May 6, 2019: I can’t believe that I’m still dealing with this issue five months later. My current “solution,” which I’ve seen a few others successfully try, is to rename Flow.exe as _Flow.exe so the Conexant software can’t find it at all.

This is where Flow.exe lives in Windows 10: C:\Program Files\CONEXANT\Flow

My HP update software is nagging me to install a new version of the Conexant sound driver, and when I reboot I get this popup:

Windows and HP really want me to reinstall the Conexant “SmartAudio” driver. I renamed C:\Program Files\CONEXANT\Flow\Flow.exe as C:\Program Files\CONEXANT\Flow\_Flow.exe in order to keep the utility from stressing the CPU while running the Firefox browser in Windows 10.

It’s sad that this has gone on for so many months without anybody fixing it. I guess it says something about the number of Windows 10 users running Firefox, and that thing is that it’s a small number.

Vons market: Very disappointing

I happened to be heading east on Victory Boulevard in the Reseda area and needed some dog and cat food, so I stopped at Vons, where Victory meets Tampa Avenue, to see what I could pick up.

I live very close to a Ralphs, and that’s my yardstick for supermarket “quality,” and the little slice of Vons that I saw was not good.

First of all, the selection was poor in comparison. And the prices were crazy high. I’m not talking 10 percent or 20 percent more. For the items I was buying, the prices were a good 30 percent to 40 percent higher at Vons than they were at Ralphs.

So either Ralphs is an incredible Shangri-La of bargains and overall bounty, or Vons is really killing its customers, getting killed by its suppliers, or both.