ThomH wrote: ↑Thu Mar 29, 2018 6:09 pm
So classic Bresenham makes a decision every pixel with negligible setup cost. Run slice makes a decision only every line, but incurs a setup cost. In a generic line drawing algorithm you'd probably switch between them based on a heuristic like major axis length is at least 8 pixels, major axis length is at least double minor. I'm asserting that for plotting Wolfenstein/Doom-esque wall edges in a low-geometry world, it's very rare that ordinary Bresenham will be the better choice.
Oki, I would never have guessed you were talking about Bresenham.
And indeed, I have rarely seen cases where classic Bresenham was a good general choice given its enormous per-step cost, except for very short lines.
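For anyone following along, here is a minimal sketch of the run-slice idea as I understand it (this is my own illustration, not ThomH's code): instead of one error-term decision per pixel, you pay one division up front and then emit one horizontal run per scanline. The one-octant restriction and exclusive endpoint are simplifications to keep the sketch short.

```c
/* Run-slice line sketch (my reading of the idea, not an exact reference
 * implementation): one setup division, then one run per scanline instead
 * of one Bresenham decision per pixel. Covers only the octant
 * dx >= dy >= 1, endpoint x1 exclusive, to keep it short.
 * Returns the number of runs written. */
int run_slice(int x0, int y0, int x1, int y1,
              int run_x[], int run_y[], int run_len[]) {
    int dx = x1 - x0, dy = y1 - y0;
    int whole = dx / dy;   /* every run is at least this long */
    int frac  = dx % dy;   /* leftover pixels, spread Bresenham-style */
    int err = dy / 2;
    int x = x0, n = 0;
    for (int y = y0; y < y0 + dy; ++y, ++n) {
        int len = whole;
        err += frac;
        if (err >= dy) { err -= dy; ++len; }  /* stretch this run by one */
        run_x[n] = x; run_y[n] = y; run_len[n] = len;
        x += len;
    }
    return n;
}
```

The setup cost is the divide/modulo pair; after that each scanline is an add and a compare, which is why it wins once the major axis is long relative to the minor.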
ThomH wrote: ↑Thu Mar 29, 2018 6:09 pm
The only optimisation I can think to offer is that if you're doing sufficiently low-resolution textures (or no textures at all) and have space for RLE versions of every slice at every scale, and are building your list of spans ahead of time, you can do what amounts to a merge sort to write six spans at a time. As in something like:
It took me some time (it is late, I should be sleeping) to understand what you meant, but I think I grasp what you mean by RLE slices:
essentially, every vertical slice/span is stored as an RLE stream, taking advantage of the fact that close-up (or low-res) textures will repeat vertically for a near-constant number of pixels (in that slice).
A merging loop then goes over six of these lists and updates the output byte as a function of the advancement in each RLE list.
That is an interesting idea indeed, but I have a feeling that it will require a lot of work to minimize the memory reads/writes needed just to update the state of the algorithm. That said, I also feel there might be room for mathematical trickery, since the run lengths of each span will be fairly predictable.
Definitely worth exploring.
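To make sure I have the merge right, here is how I picture it (names and layout are mine; six columns presumably because the Oric packs six pixels into each hires byte): each column is an RLE list of (value, length) runs, and the loop repeatedly composes one output byte, finds the nearest run boundary across all six lists, and writes that byte unchanged until the boundary.

```c
#define COLS 6  /* six pixels per hires byte, as on the Oric */

typedef struct { unsigned char value; int length; } Run;
typedef struct { const Run *runs; int count; } Column;

/* Compose COLS RLE columns into a vertical strip of bytes (one byte per
 * row, bit i = current pixel of column i). Assumes each column's runs
 * cover the full height. out must hold `height` bytes. */
void merge_columns(const Column cols[COLS], int height, unsigned char out[]) {
    int idx[COLS] = {0};
    int remaining[COLS];
    for (int i = 0; i < COLS; ++i) remaining[i] = cols[i].runs[0].length;

    int y = 0;
    while (y < height) {
        unsigned char byte = 0;
        int step = height - y;  /* rows until the next run boundary */
        for (int i = 0; i < COLS; ++i) {
            if (cols[i].runs[idx[i]].value) byte |= (unsigned char)(1 << i);
            if (remaining[i] < step) step = remaining[i];
        }
        /* The byte repeats until the nearest boundary: one repeated
         * write instead of a per-pixel decision. */
        for (int r = 0; r < step; ++r) out[y + r] = byte;
        y += step;
        for (int i = 0; i < COLS; ++i) {
            remaining[i] -= step;
            if (remaining[i] == 0 && idx[i] + 1 < cols[i].count) {
                ++idx[i];
                remaining[i] = cols[i].runs[idx[i]].length;
            }
        }
    }
}
```

The inner bookkeeping loop is exactly the per-event cost we were worrying about: six decrements and six compares per event, regardless of how many rows the event lets you skip.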
ThomH wrote: ↑Thu Mar 29, 2018 6:09 pmI'm sure there are algorithmically better ways to find the next event, like a priority queue, but I think that this probably falls under the umbrella of lower complexity algorithms often actually being slower for very small data sets for practical reasons.
Could be, especially if the part "apply change or changes to byte as dictated by event location, increment list pointers" ends up requiring more reads/writes than we save.
Hard to say at this stage though.
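To put the "small data sets" point concretely (my illustration): with only six RLE lists, finding the next event by linear scan is five compares and no pointer chasing, so a priority queue has almost nothing to amortize its maintenance cost against.

```c
/* Find which of six RLE lists has the run that ends soonest.
 * Five compares, no heap bookkeeping: for n = 6 this is very hard
 * for an O(log n) priority queue to beat in practice. */
int next_event(const int remaining[6]) {
    int best = 0;
    for (int i = 1; i < 6; ++i)
        if (remaining[i] < remaining[best])
            best = i;
    return best;  /* index of the list whose run ends soonest */
}
```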
ThomH wrote: ↑Thu Mar 29, 2018 6:09 pmNekoNoNiaow wrote: ↑Thu Mar 29, 2018 5:31 am
Rendering objects and walls on different scanlines seems like a potential compromise.
I think that's what the Vic Doom is doing?
I just checked and it does not use this technique. It uses a very small window with constant horizontal resolution and 1-pixel vertical resolution.
I could count up to six colors (the VIC20 can do 16). (For reference:
https://www.youtube.com/watch?v=WFMM3F_-bx0)
But they likely use the "bitmapped" mode of the VIC20, which is actually a multicolor text mode where the characters are organized in memory so that the code can treat them as a bitmap.
The Oric could do the same with its text mode. This would have the advantage of allowing the X and Y axes to be swapped, reducing the amount of computation needed to draw vertically, especially if all bytes of a character are sequential in memory (I have not checked).
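Here is why that layout would matter, sketched with hypothetical dimensions (8x8 cells, 25 cell rows, cells stored column by column; none of this is checked against real Oric or VIC20 memory maps):

```c
/* Hypothetical char-cell "bitmap": 8x8 cells, the 8 bytes of each cell
 * sequential in memory, cells stored column by column. Dimensions are
 * assumed, not taken from real hardware. */
#define CELL_ROWS 25  /* assumed screen height in cells */

unsigned addr_of(unsigned x, unsigned y) {
    unsigned cell_col = x / 8;                 /* which column of cells */
    unsigned bytes_per_cell_col = CELL_ROWS * 8;
    return cell_col * bytes_per_cell_col       /* skip whole cell columns */
         + (y / 8) * 8                         /* skip whole cells above */
         + (y % 8);                            /* row within the cell */
}
```

With cells laid out column by column, moving down one pixel is always address + 1, even when crossing a cell boundary, so a vertical span becomes a plain sequential write loop. That is the whole payoff of swapping the axes.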
Xeron wrote: ↑Thu Mar 29, 2018 10:22 pm
Or, to avoid fisheye, you just add a constant to all your distances, iirc. Simples.
I actually love the fisheye effect and never understood why Carmack got rid of it.
It compensates for the fact that the player views the world through a very narrow window, and kinda recreates the fact that humans have a 180-degree horizontal field of view.
Combined with reduced resolution at the edges this would actually be quite faithful to how human vision works.