> If P96/Graphics AllocBitMap() functions returned VRAM-resident allocations that were already texture-friendly aligned for the target GPU, *none* of this complex allocation strategy would be necessary.

Yes, although I still don't see the reason for the "complex" part here.
> Allocating a BitMap doesn't even guarantee that the area you allocate is even in VRAM, let alone aligned appropriately. If you just allocate a BitMap with these functions, you'll get a buffer in system memory that is copied to a VRAM allocation if/when P96 deems it appropriate to do so.

So it may be that all your realignment fun is useless because, at the end, what you got was normal RAM?
> So, a VRAM allocator that sat "on top" of all this was written, and it is generally quite efficient. It allocates as few BitMaps as possible and re-uses memory within them for many textures.

Efficient maybe, but something in this house of cards is obviously flawed.
> In your case, allocating large (1024x1024 and higher) textures is *always* going to push the allocator into requesting a new BitMap.

Which should be no problem; that's its job.
> 1) The allocated BitMap is in VRAM.

So do you have control over that or not? From your sentence above it sounded like it's a matter of luck: sometimes your memory is in VRAM, sometimes not.
> 2) The allocated BitMap is not excessively rectangular (in order not to upset the graphics subsystem, which has width/height restrictions too).

So what are the concrete limits here?
> 3) The allocated BitMap's total linear size is *at least* as large as the requested allocation plus whatever padding is required for texture alignment requirements, which P96/graphics generally know nothing about.

Yes. Which means: a few extra bytes.
> Ideally, there should be some "allocate VRAM aligned to my exact requirements" function exposed by the graphics subsystem that W3D could use, but instead we have BitMaps.

And why not add such a function?
> When allocating your 2048x2048x32-bit texture, I expect that it should fail almost always, if not actually always, because the padding alone will probably cause BitMap dimensions larger than 2048x2048.

Yes, yes. If you have to realign such a 2048x2048 texture, and if the only way to allocate RAM is a BitMap function with a 2048x2048 limit, well...
Then you could try to request buffers from it multiple times until you get a well-aligned one. Hm, actually, this doesn't sound much worse than other strategies...
But how do you explain the 2048x1024 / 1024x2048 failures? As I said, those fail in pretty much exactly the same way, and your explanations don't cover them. From what I see, I'd guess the resize algorithm is flawed; at least that could explain it. What's your thought on that? Where will you start digging?