Simulation Resolution

  • lewster
    1st Aug 2013 Member 1 Permalink

    Would it be possible to have an adjustable world resolution when you start The Powder Toy? As in, on boot, you could select, say, a 10 particle by 30 particle world. I think that'd be pretty handy, because half the time when I'm making, say, an electronic component such as a laser, it doesn't take much space, and the rest of the area is not used. 

  • boxmein
    1st Aug 2013 Former Staff 0 Permalink

    @lewster (View Post)
    There have been many discussions on this topic, and the general consensus is that it would be fairly easy and would not hurt speed too much. The biggest problem would be compatibility with existing saves.

  • lewster
    1st Aug 2013 Member 0 Permalink

    @boxmein (View Post)

     Existing saves could stay at the current resolution, and new saves could use a selected resolution. If you wanted to transfer something to a lower resolution, you could copy it (or stamp it) and it would get pasted in at the same particle dimensions, which in the new resolution would look bigger or smaller, depending on how the new resolution compares to the old one.

  • jacob2
    1st Aug 2013 Member 0 Permalink
    @boxmein (View Post)
    No, XRES and YRES have to be constant; otherwise the game would get slower. I'm not sure how much slower anymore, no one has tried in a long time.

    Also, doing this would just make a large mess in the code, and many things like the simulation would have to be recreated, causing lag when resizing.

    Edit: also, making the simulation smaller might save some lag, but so would turning off air simulation. Making the game larger is 0% possible; it may not fit on some screens and causes tons of unnecessary lag, making it unplayable for some.
    Edited once by jacob2. Last: 1st Aug 2013
  • Catelite
    1st Aug 2013 Former Staff 0 Permalink

    Closest you could get to this would be by adjusting the maximum Zoom size and sticking the zoom window inside of a wall box, really. It'd be silly to modify the entire game around the idea of resolution, especially since empty space isn't calculated. Use Absorb walls around the desired space if you want particles to disappear after escaping.

  • boxmein
    1st Aug 2013 Former Staff 0 Permalink

    @jacob2 (View Post)
    > many things like the simulation would have to be recreated causing lag when resizing.
    What about stopping everything (simulation, drawing, the lot) while it resizes? After the resize is complete would be a good time to recreate the Simulation object. Or, rather, allow one to set the screen size before creating a Simulation. Panning around? The possibilities are pretty wide.



    @jacob2 (View Post)
    > No, XRES and YRES have to be constant, if not the game would get slower. I'm not sure how much slower anymore, no one has tried in a long time
    For when walls need to be handled (they're in a separate grid, of course), one could either snap the game size to four-pixel increments or slide the bottom-right rows out of the drawing/particle area just enough to fit. Setting a minimum size would also be nice.
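    The snapping idea could be sketched like this (a sketch only: the 4-pixel cell size matches the game's CELL constant, but snapToCell and MIN_DIM are invented for illustration):

```cpp
#include <cassert>

constexpr int CELL = 4;      // wall cells are 4x4 pixels
constexpr int MIN_DIM = 96;  // made-up minimum dimension

// Hypothetical helper: clamp a requested window dimension to a minimum,
// then round down to a multiple of the wall-cell size so the wall grid
// always divides the particle area evenly.
int snapToCell(int requested)
{
    if (requested < MIN_DIM)
        requested = MIN_DIM;
    return requested - (requested % CELL); // round down to 4-px multiple
}
```

    So a request for a 613-pixel-wide window would become 612 pixels, and anything below the minimum would be bumped up to it.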



    @boxmein (View Post)
    Also this here was influenced a lot by http://tpt.io/.240465



    In a perfect world, however.
    The point is, the devs work for free, and most of the above will be rejected because nobody wants to take on the task of making it happen.

  • jacksonmj
    2nd Aug 2013 Developer 0 Permalink

    @boxmein (View Post)

    Also this here was influenced a lot by http://tpt.io/.240465

     

    Have you read page 2 of that thread?

     

    I believe that allowing a size bigger than the current one is a bad idea, for reasons of speed and still being able to run all saves on less powerful computers and Android devices with not-so-big screens.

     

    As for making the game area smaller in an attempt to make it run faster:

    Actual resizing is a pain in terms of all the changes needed, and for the reasons discussed in page 2 of the linked thread, will make the game slower by a to-be-determined amount. Not a good idea in my opinion.

    Or there's fake resizing, changing only some parts: for example, doing air simulation in just a small area of the window, yet still allocating memory in the quantities required for the default window size. This will probably be more trouble than it's worth, but maybe someone could try it if they have the time.
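    The "fake resizing" could look something like restricting the air loop's bounds while keeping the full-size arrays (a sketch under stated assumptions: the real air code is more involved, and AirSim, setActiveArea, and the field names are all invented for illustration):

```cpp
#include <vector>

constexpr int CELL = 4;
constexpr int XRES = 612, YRES = 384;       // full default window
constexpr int GW = XRES / CELL, GH = YRES / CELL;

struct AirSim {
    // Memory is still allocated for the full default window...
    std::vector<float> pv = std::vector<float>(GW * GH, 0.0f);
    int activeW = GW, activeH = GH;         // ...but only part is simulated

    // Invented API: shrink the simulated region without reallocating.
    void setActiveArea(int w, int h)
    {
        activeW = (w / CELL < GW) ? w / CELL : GW;
        activeH = (h / CELL < GH) ? h / CELL : GH;
    }

    int cellsSimulated() const { return activeW * activeH; }
};
```

    A 200x100 active area would then simulate 50x25 air cells instead of the full 153x96, without any resizing of the underlying arrays.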

    Edited 2 times by jacksonmj. Last: 2nd Aug 2013
  • MiningMarsh
    2nd Aug 2013 Member 0 Permalink

    @jacksonmj (View Post)

     The slowdowns should actually be avoidable with clever optimization. Making XRES and YRES consts that are set once at simulation start usually lets the compiler optimize them just as well as a real constant value. Constant values have to be loaded into a register just like variables do; the only optimizations the compiler can make depend on the fact that the value won't change, so it can make exactly the same optimizations for a const value.

     

    So yeah, I can see how all the needed changes would be a pain, but I'm not convinced that a good implementation would be slower than a static resolution. The only way I could see the static version being faster is if you make function calls on XRES or YRES, in which case (at least with GCC) the compiler will just remove the function call altogether by precomputing the value. As far as I remember, this is the only extra optimization a compiler can make on a true compile-time constant versus a const.
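    The argument can be illustrated with a sketch (Grid is an invented class, not the actual Simulation): if the dimensions are const members set once at construction, the optimizer can treat expressions like xres * yres as loop-invariant and hoist them, much as it would a #define'd constant.

```cpp
#include <vector>

// Sketch: runtime-chosen but immutable dimensions. Because xres and
// yres are const and never change after construction, the compiler can
// hoist the loop bound xres * yres out of the hot loop.
class Grid {
public:
    Grid(int w, int h) : xres(w), yres(h), cells(w * h, 0) {}

    void clear()
    {
        // xres * yres cannot change mid-loop, so it is computed once.
        for (int i = 0; i < xres * yres; ++i)
            cells[i] = 0;
    }

    int size() const { return xres * yres; }

private:
    const int xres, yres;
    std::vector<int> cells;
};
```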

  • jacksonmj
    2nd Aug 2013 Developer 0 Permalink

    @MiningMarsh (View Post)

    As I understand it, consts (as in declaring "const int XRES;") must be set at compile time; they cannot be changed when the simulation is started (at runtime).

     

     

    If you want the technical reason why using a constant may be faster than loading the value from a variable, it's memory accesses.

    If the value is a constant, then the value is embedded in an assembly instruction. Using this value only requires the same accesses to main memory to fetch the instructions as executing any other code would require.

    If the value is a variable, the value has to be loaded from memory, separately to loading the instructions from memory. The additional memory access makes it slower than reading a constant value from an instruction into a register, if the relevant bit of memory isn't in the CPU cache.
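    The difference can be seen by comparing two trivial functions (illustrative only; the exact code generated varies by compiler and flags, and the names here are made up):

```cpp
// Two ways of supplying the width to a hot function.
constexpr int XRES_CONST = 612; // value gets embedded in the instruction
static int xres_var = 612;      // value must be loaded from memory

// Typically compiles to something like "imul eax, edi, 612":
// the 612 is an immediate operand inside the instruction itself.
int rowOffsetConst(int y) { return y * XRES_CONST; }

// Typically compiles to a load of xres_var followed by the multiply:
// an extra memory access, and a potential cache miss if it is cold.
int rowOffsetVar(int y) { return y * xres_var; }
```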

    Edited 2 times by jacksonmj. Last: 2nd Aug 2013
  • MiningMarsh
    2nd Aug 2013 Member 0 Permalink

     @jacksonmj (View Post)

     No, constants can be set at declaration.

     

    const int XRES = something * 5; // This is valid. Make it a local and it will be set on every call.

    C++:

    class Foo {
    private:
        const int XRES;
    public:
        // This sets XRES once, fixed for the object's lifetime.
        Foo(int initXRES) : XRES(initXRES) {}
    };

     

    So my point was to set it once when starting a new simulation and then never change it.

     

    And yes, I'm aware of the memory loading, but my point was that the compiler can often remove most of these memory fetches for const variables, since it can assume the value won't ever change.

     

    Interestingly, I wonder whether it might actually improve performance in theory, since a const variable's location can sit in the L1 data cache. Embedding the value in the instructions means it competes for the L1 instruction cache instead, and from what I hear, Powder is not at all L1 instruction cache friendly.