Nice device, but the experience write-up is more about distro choices than anything. It's quieter than the older units, and it's harder to run a two-disk SSD RAID because of some design choices. Is it faster? How many VMs can it run? What's the throughput if you use it for complex network-related roles not offloaded to the MikroTik switching/routing kit?
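For what it's worth, the raw-throughput half of that question is easy to sanity-check with iperf3; the hostname below is a placeholder, and this only measures bulk TCP throughput, not routing or firewalling performance:

```bash
# On the MS-R1 (or whatever box is under test): start an iperf3 server
iperf3 -s

# From another 10G-connected machine: 30-second test with 4 parallel streams
iperf3 -c ms-r1.lan -P 4 -t 30

# Same test in the reverse direction (server transmits, client receives)
iperf3 -c ms-r1.lan -P 4 -t 30 -R
```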
$599 seems like a lot to me. You can get numerous older, much more powerful Mini PCs (e.g., older ThinkCentre Tiny series) or even a brand-new base M4 Mac Mini for that kind of money.
Admittedly, the 10G interfaces and fast RAM make up for some of it, but at least for a normal homelab setup, I can't think of an application needing RAM faster than even DDR3, especially at this power level.
A base Mac Mini (256GB/16GB) would cost me €720 while a Minisforum MS-R1 (1TB/32GB) would cost me €559 (minus a 25 euro discount for signing up to their newsletter if you accept that practice).
On price-to-performance the Apple solution may be better, but the prices aren't similar at all.
Upgrading the Mac to also have 1TB of storage and 32GB of RAM raises the price by a whopping €1000, to €1719.
559 vs 720? That's literally like a few coffees. I went to Amsterdam (assuming you're Dutch) and paid 5 euro for a coffee.
Go for the Mac Mini; the hardware, including the thermal design, is also built exceptionally well. That's why you still see 20-year-old Mac Minis running as home servers, etc.
Most people don't care about the nominal difference between x86 and ARM. They care about cost, performance, efficiency, noise, etc. Which applications run on the machine does matter.
The article never explained why the author wanted an ARM setup. I can only consider this a spiritual thing, just like how the author avoids Debian without providing any concrete explanations.
The usual reason to prefer ARM is efficiency, and the author's mention of replacing "power-hungry HPE towers" seems to support that as a primary motivating factor.
True. But as detailed in the Jeff Geerling article that was shared here in the comments, it has (at least at the moment) a rather high idle power draw, which seems to negate that, especially over time.
True. However, I've always noticed that ARM has less Linux support than x86, and the main benefits ARM is known for are typically performance/watt, running cooler, and less legacy support.
Since this server seems to have pretty average performance/watt and cooling, I can't really see much advantage to ARM here, at least for typical server use cases.
Unless you're doing ARM development, but I feel like a Pi 4/5 is better for basic development.
Linux support for ARM is inferior for end users of third-party desktop software. Everything else is provided by the repos. I doubt this person runs Signal or Spotify on those servers.
Funny, I just bought one of these last week. Agree with the article. Mine came with storage and Debian preinstalled. If you buy one from Amazon, keep an eye on the price. I bought one, then the next day the price dropped $150. I ordered another one and returned the more expensive order.
For those who don't need quite that much power: I recently added an Orange Pi 5 to my own homelab, and the RK3588 SoC packs an impressive punch for what it is.
As of this past year (6.15+), most stuff you'd need for a regular desktop is upstreamed. Collabora has been working pretty hard on getting the chip mainlined, so it's in a very good place compared to something like the Pi 5; not at all what the experience used to be in the past!
I was wondering why the PSU is half the size of the compute unit housing. 15 years ago, sure, but today it just seems cheap and lazy on the part of whoever designed it.
Caveat: I'm frequently mistaken, always keen to learn and reduce the error between my perception and reality!
>I'm not a hardware engineer, I've failed miserably in software engineering and now run a VPS host.
I'm curious how hard it was to get a VPS hosting business off the ground? I previously worked 5 years as a Linux sysadmin, but I'm getting pretty bored at my current job (administering Cisco VoIP systems). I think I'd rather go back to that.
I have a personal ban on any hardware that isn't powered by USB-C. (Or, if it's large, I'll accept a C17 socket.) Either give me a GaN brick or I'll get one myself.
Otherwise I'd probably have a few machines from this company.
It's more maintenance due to its frequent release cycle, but it's perfectly good as a server OS. I've used it many times, and friends use it too.
You can't afford to fall behind the release cycle, though, because their package repos drop old releases very quickly and you're left stranded.
A friend recently converted his Fedora servers to RHEL 10 because he has kids now and just doesn't have the time for the release cycle. So RHEL, or Debian, Alma, Rocky, offer a lot more stability and a lot less maintenance for people who have a life.
I don't feel like this really answers the question though, right? At least not at face value.
I could see the maintenance burden being a potential point, meaning that one would be "pushed" to update the system between releases more often than with something else.
Typically you want stability and predictability in a server. A platform that has a long support lifecycle is often more attractive than one with a short lifecycle.
If you can stay on v12.x for 10 years versus having to upgrade yearly to maintain support, that's ideal. 12.x should always behave the same way with your app, whereas every major version upgrade may have breaking changes.
Servers don’t need to change, typically. They’re not chasing those quick updates that we expect on desktops.
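To make the "stay on v12.x" point above concrete with a Debian-flavoured example (purely illustrative, not from the thread): pinning apt to a release codename instead of "stable" means a new major release never arrives as a surprise upgrade.

```
# /etc/apt/sources.list: track the bookworm (Debian 12) codename rather than "stable"
deb http://deb.debian.org/debian bookworm main
deb http://deb.debian.org/debian bookworm-updates main
deb http://deb.debian.org/debian-security bookworm-security main
```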
Yeah, and that's the take I expected to hear based on what was said.
However, for something like ARM and the use case this particular device may have, in reality you would _want_ (my opinion) to be on a more rolling-release distro to pick up the updates that make your system perform better.
I'd take a similar stance for devices that are built in a homelab for running LLMs.
I think it's highly circumstantial. For example, my personal servers run a lot of FreeBSD and even though I could stay on major releases for a rather long time, I usually upgrade almost as soon as new releases are available.
For servers at work, I tried running Fedora. The idea was that it would be easier to have small, frequent updates rather than large, infrequent updates.
Didn't work. App developers never had enough time to port their stuff to new releases of the underpinning software, so we frequently had servers on an unsupported OS version.
Gave up and switched to Rocky Linux. We're in the process of upgrading the Rocky 8-based stuff to Rocky 9. Rocky 9 was released in 2022.
Slight tangent, but has anyone had experience running Asahi headless on an M2 MacBook? I have an M2 Air with a damaged screen I'd like to repurpose. Mostly want Docker containers or something Coolify-adjacent.
With full disk encryption enabled, you need a keyboard and display attached at boot to unlock it. You then need to sign in to your account to start services. You can use an IP-based KVM, but that's another thing to manage.
If you use Docker, it runs in a VM instead of natively.
With a Linux-based ARM box you can use full disk encryption, SSH in at boot with dropbear to unlock the disks, run Docker natively, run Proxmox, etc.
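In case it's useful, this is roughly what the dropbear unlock setup looks like on a Debian/Ubuntu-style install; package names and config paths vary a bit by release, so treat it as a sketch rather than exact instructions:

```bash
# Install dropbear support for the initramfs
sudo apt install dropbear-initramfs

# Allow your SSH key into the early-boot environment
# (older releases use /etc/dropbear-initramfs/authorized_keys instead)
echo "ssh-ed25519 AAAA... you@laptop" | sudo tee -a /etc/dropbear/initramfs/authorized_keys

# Rebuild the initramfs so it includes dropbear and the key
sudo update-initramfs -u

# On the next boot, from another machine:
#   ssh root@<server-ip>
#   cryptroot-unlock
```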
Mac Minis/Studios have the potential to be great low-power home servers, but Apple is not going down that route for consumers. I'd be curious whether they are using their own silicon and their own server-oriented distro internally for some things.
"On a Mac with Apple silicon with macOS 26 or later, FileVault can be unlocked over SSH after a restart if Remote Login is turned on and a network connection is available."
Thanks for the reply. I'm looking to replace my aging mini pc with a mac mini, so I'm quite interested in any limitations here.
The full disk encryption I can live without. I'm assuming these limitations don't apply if it's disabled. [Ah, I just saw the other reply that this has now been fixed]
I was aware of the Docker in a VM issue. I haven't tested this out yet, but my expectation is this can be mitigated via https://github.com/apple/container ?
I appreciate any insights here.
"The root of trust for Private Cloud Compute is our compute node: custom-built server hardware that brings the power and security of Apple silicon to the data center, with the same hardware security technologies used in iPhone, including the Secure Enclave and Secure Boot."
Granted, I don't know if it's really server oriented or if they're a bunch of iPhones on cards plugged into existing servers.
Most likely wanting to run Linux natively. Only M1/M2 can fill that role with Asahi, and still not with 100% hardware compatibility.
On the flip side, an M4 mini is cheaper, faster, much smaller (with built in power supply) and much more efficient. Plus for most applications, they can run in a Linux container just as well.
Thanks for the reply Jeff. This aligns with my understanding too. I'm close to purchasing a mac mini to replace my aging media pc. The core feature I want is to run microK8s natively, which I'm assuming the newish Mac containers will support.
> There is also one other perk: while the MSRP is $599, I got it for $559 despite a RAM shortage.
At that price, why not a Mac Mini running Linux? I think (skimming the Asahi docs) the only things that would give you trouble don't matter for the headless use case here?
> The strange CPU core layout is causing power problems; Radxa and Minisforum both told me Cix is working on power draw, and enabling features like ASPM. It seems like for stability, and to keep memory access working core to core, with the big.medium.little CPU core layout, Cix wants to keep the chip powered up pretty high. 14 to 17 watts idle is beyond even modern Intel and AMD!
Does FreeBSD work better?
Ordering a second unit and returning the first is so incredibly inefficient. Multiply that by how many times this happens every day...
However, I'm not sure any of the RK3588 vendors both support UEFI and have a full-size PCIe slot like the MS-R1 has.
Minisforum probably reused the x86 power supply for the ARM model. The x86 MS-01 and MS-A2 support GPUs, after all.
My Beelink Me Mini has an integrated PSU. Same with the EQR6 I got, actually.
https://archive.is/rIAVo
Why is Fedora not considered good for a server?
Fedora only supports each release for around 13 months, whereas Debian/Ubuntu have 5 years and RHEL/Alma/Rocky have 10 years.
For myself I've had nothing but positive experiences running Fedora on my servers.
> I’ve always wanted an ARM server in my homelab. But earlier, I either had to use an underpowered ARM system, or use Asahi...
What is stopping you from using a Mac with macOS?