Will Mechaenetia support 32-bit architectures?

I think 32-bit architectures have hard limitations on critical resources like thread count, file size, and addressable virtual memory.

A 32-bit address has 4,294,967,296 possible values which, if each value addresses an individual byte, comes to 4 GB.
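As a quick sanity check of that arithmetic, sketched in Rust (the language the game is written in):

```rust
fn main() {
    // A 32-bit address can take 2^32 distinct values.
    let addresses: u64 = 1u64 << 32;
    assert_eq!(addresses, 4_294_967_296);
    // One byte per address gives exactly 4 GiB.
    assert_eq!(addresses, 4 * 1024 * 1024 * 1024);
    println!("{} addresses = {} GiB of bytes", addresses, addresses >> 30);
}
```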

It may not be realistic for a game of this scope to support that, but I’m not an expert on this subject.


Rust can compile to 32 Bit just fine, even if I use stuff like 128 Bit Integers, or similar.

And 4 Billion Entities that can be loaded at the same time is quite a lot. Minecraft doesn’t get above 1000 usually. The 2 GB of RAM limit might be an Issue with 32 Bit Stuffs, but I don’t think I will waste THAT much RAM on unimportant Stuff, things can be stored in Files just fine usually.

The Game is supposed to run on a Raspberry Pi 3B and a Raspberry Pi 4 (with 4GB RAM), so this should cover both 32 and 64 Bit ARM. The Computer I use right now is 64 Bit Intel, and I also happen to own a 32 Bit Netbook that I can't manage to install Linux on for some reason (it's a shitty Intel Mobile Processor from a Samsung NC10).

Lots of OS Compatibility is planned, except for Apple, fuck Apple, let those people compile it for their own Apple OS if they really want to. Unless the ARM Version happens to work for them with their new Processors, then “eh, but don't expect me to help much”.


That’s not a limit on entities, it’s a limit on addressable memory. You would have a maximum of 4,294,967,296 bytes (4 GiB) of addressable memory.

Let’s say you want to load some voxel data. You previously mentioned that you will be using an octree where each node stores eight pointers to its children plus color information, and each leaf stores a reference to block data. Let’s say that is 36 bytes per node (eight 32-bit pointers plus 32 extra bits holding either the block reference or the color information = 9×32 bits = 9×4 bytes = 36 bytes).
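On a 32-bit target that layout works out exactly; here is a hypothetical Rust sketch of such a node (the field names are mine, not the project's):

```rust
// Hypothetical octree node matching the 36-byte estimate on a
// 32-bit target: eight child pointers plus one 32-bit payload.
#[repr(C)]
struct OctreeNode {
    children: [u32; 8], // 32-bit pointers/indices to children: 8 x 4 = 32 bytes
    payload: u32,       // block reference or packed color: 4 bytes
}

fn main() {
    // No padding is needed since every field is 4-byte aligned.
    assert_eq!(std::mem::size_of::<OctreeNode>(), 36);
    println!("node size = {} bytes", std::mem::size_of::<OctreeNode>());
}
```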

Now you want to render a 500 m cube of the world. Unless you want your rendering routine to choke, that data needs to be mapped to memory, right? Well, each voxel is 0.25 m on a side, so covering 500 m takes 500 / 0.25 = 2000 voxels per axis. You would need 2000^3 = 8 billion voxels just to cover the leaf nodes.

(The nearest size that actually fits an octree would be one with twelve levels, i.e. 9,817,068,105 nodes total.)

Details
At what depth level will an octree first add at least 8,000,000,000 nodes?
How many nodes will an octree have by that depth?
1. 1 node
2. 1 + 8 = 9 nodes
3. 9 + 64 = 73 nodes
4. 73 + 512 = 585 nodes
5. 585 + 4,096 = 4,681 nodes
6. 4,681 + 32,768 = 37,449 nodes
7. 37,449 + 262,144 = 299,593 nodes
8. 299,593 + 2,097,152 = 2,396,745 nodes
9. 2,396,745 + 16,777,216 = 19,173,961 nodes
10. 19,173,961 + 134,217,728 = 153,391,689 nodes
11. 153,391,689 + 1,073,741,824 = 1,227,133,513 nodes
12. 1,227,133,513 + 8,589,934,592 = 9,817,068,105 nodes
An octree will first add at least 8,000,000,000 nodes at depth level 12.
By then the octree will contain 9,817,068,105 nodes.
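The table above can be reproduced in a few lines of Rust: a full octree with d levels contains (8^d − 1)/7 nodes, and level d adds 8^(d−1) new ones (a quick sketch, not project code):

```rust
fn main() {
    let target: u64 = 8_000_000_000;
    let mut total: u64 = 0;
    for depth in 1..=12u32 {
        let added = 8u64.pow(depth - 1); // nodes added at this level
        total += added;
        println!("level {:2}: +{} -> {} nodes", depth, added, total);
        if added >= target {
            // Level 12 is the first to add at least 8 billion nodes,
            // for a running total of 9,817,068,105.
            assert_eq!(depth, 12);
            assert_eq!(total, 9_817_068_105);
        }
    }
}
```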

4 gigabytes is only enough room for 119,304,647 voxels at 36 bytes per voxel, and that’s assuming the entire virtual address space is mapped to voxel data. That’s about enough memory to map a cube of voxel data roughly 128 m on a side.
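Spelled out as a sketch (assuming the full 4 GiB address space and the 36-byte node estimate above; the exact cube side comes out near 123 m, which rounds up to the 128 m ballpark):

```rust
fn main() {
    let address_space: u64 = 1 << 32; // 4 GiB of 32-bit virtual memory
    let bytes_per_voxel: u64 = 36;    // estimated octree node size
    let voxels = address_space / bytes_per_voxel;
    assert_eq!(voxels, 119_304_647);

    // Side length of the largest cube that fits, at 0.25 m per voxel.
    let side_voxels = (voxels as f64).cbrt(); // ~492 voxels per axis
    let side_meters = side_voxels * 0.25;     // ~123 m per side
    println!("{} voxels -> cube ~{:.0} m on a side", voxels, side_meters);
}
```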

I may be wrong here, but it is my understanding that the 4GB is a limitation on virtual memory, not physical memory, so even memory-mapped I/O is subject to it. You can still do normal file I/O beyond 4GB, but I believe that is considerably slower. I think your render routine would choke if you tried to load the data from disk in real time.


You assume that Leaves with 8 identical contents (such as Air, or the Stone underground) wouldn’t be optimized to one single Leaf instead of split into tinier and tinier Leaves. And I am sure there will be easy ways to optimize Colors of those Leaves for Rendering too, not to mention the World doesn’t keep all the Blocks loaded, it only keeps the visible Blocks loaded, and the Blocks that Entities are actively changing or asking for.
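For what it's worth, that collapsing optimization can be sketched in Rust (hypothetical types, not Mechaenetia's actual code): a branch whose eight simplified children are all the same leaf is replaced by that single leaf.

```rust
// Hypothetical sparse-octree node: identical children collapse into one leaf.
#[derive(PartialEq, Debug)]
enum Node {
    Leaf(u32),              // block reference (e.g. Air, Stone)
    Branch(Box<[Node; 8]>), // eight children, only kept when they differ
}

impl Node {
    /// Recursively collapse branches whose eight children are the same leaf.
    fn simplify(self) -> Node {
        match self {
            Node::Branch(children) => {
                let simplified = (*children).map(Node::simplify);
                if let Node::Leaf(v) = &simplified[0] {
                    let v = *v;
                    if simplified.iter().all(|c| matches!(c, Node::Leaf(x) if *x == v)) {
                        return Node::Leaf(v); // all eight identical -> one leaf
                    }
                }
                Node::Branch(Box::new(simplified))
            }
            leaf => leaf,
        }
    }
}

fn main() {
    const AIR: u32 = 0;
    // A branch of eight identical Air leaves collapses to a single leaf.
    let uniform = Node::Branch(Box::new(std::array::from_fn(|_| Node::Leaf(AIR))));
    assert_eq!(uniform.simplify(), Node::Leaf(AIR));
    println!("eight identical children collapsed into one leaf");
}
```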

With the Colors I should specify that they are only going to be sent to the GPU for fast rendering.
