Coding decimel

  • MachineMan
    28th March Member 1 Permalink

    I've been trying to make a computer for a while, and after figuring out how to make a DEC->BIN converter and a BIN->DEC converter from the Complete electronics tutorial in the tpt wiki (which, by the way, really needs to be finished), I've hit a wall.  A computer must be able to work with at least 1 byte of information, but those radix converter designs are big and bulky; building them to handle a full byte is too hard.  Luckily, I've learned about BCD (binary-coded decimal).  It's much easier to work with BCD than to assign 255 individual outputs for a BIN->DEC converter.  But I can't find any BCD->BIN or BIN->BCD converters that aren't subframe (subframe is too hard to study and reverse engineer; I'm a noob).  What do I do?
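
    (For reference, BCD just stores each decimal digit of a number in its own 4-bit group, so 255 becomes the three nibbles 0010 0101 0101.  A quick Python sketch of the idea, purely illustrative and not TPT circuitry:)

    # Illustrative only: pack each decimal digit of a byte into its own 4-bit nibble.
    def to_bcd(n):
        assert 0 <= n <= 255
        hundreds, tens, ones = n // 100, (n // 10) % 10, n % 10
        return (hundreds << 8) | (tens << 4) | ones

    print(format(to_bcd(255), '012b'))  # 001001010101 -> nibbles 2, 5, 5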

     

    NOTE:  I realized too late that I misspelled decimal in the thread title.

    Edited once by MachineMan. Last: 28th March
  • ArolaunTech
    1st April Member 0 Permalink

    If you want small electronics, as you have observed, subframe is pretty much your only option.

     

    If you want to learn subframe I have these tips:

    1. Make stuff using subframe. Anything will help, no matter how simple.

    2. Learn how elements work. Some of the most important elements for subframe electronics are ARAY, CONV, CRAY, DRAY, DTEC, FILT, LDTC, and LSNS.

    3. Learn particle order. TPT simulates particles from top to bottom, left to right. Make sure to save your creation, exit the save, and reopen it before testing (this resets the particle IDs so they match that order).

    4. Carefully analyze how existing subframe creations work so you can learn from them.

  • MachineMan
    2nd April Member 0 Permalink

    I'm no stranger to subframe; I invented the CRAY wire and CRAY logic, which are much more compact than DRAY wire and DRAY logic.

    But even if you have experience subframing, and even with the subframe chipmaker, it's still hard to reverse engineer complex subframe tech.  I barely managed to reverse engineer a subframe XOR gate; a device as complicated as a BIN->BCD or BCD->BIN converter is out of the question.  The problem with the DEC->BIN and BIN->DEC from the Complete electronics tutorial is that to use 1 byte I would need to assign 255 individual inputs for the DEC->BIN and 255 outputs for the BIN->DEC; both construction processes are tedious, especially for the BIN->DEC.  With BCD I can avoid that problem, but I need non-subframe converters so it's easier to understand how converting BCD to binary and back again works.  I'll try to experiment on my own, but it's likely I'll still need help.

    Edited once by MachineMan. Last: 3rd April
  • Jerehmia
    2nd April Member 0 Permalink

    The algorithm to convert binary into BCD is called double dabble, and it isn't easy to implement. An alternative would be to use excess-3 encoding instead of binary. Excess-3 is a form of BCD, so it's easy to convert into a readable format, and you can do binary addition and subtraction with it with a small correction. Lots of 8-bit CPUs implemented excess-3 because double dabble is so computationally expensive (the 6502 did, for instance).
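
    Roughly, the excess-3 trick works like this: each digit is stored as its value plus 3, the two 4-bit digits are added with an ordinary binary adder, and the result is corrected by +3 or -3 depending on whether that digit produced a carry.  A small Python sketch of the idea (just an illustration, not the circuitry in the save below):

    # Illustrative excess-3 addition for a single digit (each digit stored as value + 3).
    def xs3_add_digit(a_xs3, b_xs3, carry_in=0):
        total = a_xs3 + b_xs3 + carry_in                # sum is now excess-6
        carry_out = total >> 4                          # did it overflow the 4-bit digit?
        digit = total & 0xF
        digit = digit + 3 if carry_out else digit - 3   # re-bias back to excess-3
        return carry_out, digit

    # 7 + 5: encoded as 7+3=10 and 5+3=8 -> decimal carry 1, result digit 2 (stored as 5)
    print(xs3_add_digit(10, 8))                         # (1, 5)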

     

    This save demonstrates subframed excess-3 components:

    Edited once by Jerehmia. Last: 2nd April
  • MachineMan
    4th April Member 0 Permalink

    That design is cool, but large subframe machines are too hard to reverse engineer and I'm better at making simpler subframe tech like wires and logic gates.  Here's the tech I'm using:

     

    1. Radix converters using parallel SPRK data (ID:3096033).

     

    2. SIPO and PISO shift registers that convert serial FILT data into parallel SPRK data and vice-versa (ID:3096036).

     

    3. Binary adder/subtractor that deals with serial FILT data (ID:3092350, the multiplier/divider is still under construction).

     

    As you can see, the radix converters can only deal with a half-byte, because versions that deal with a full byte would be too large to make.  This is my 8-step plan:

     

    Step 1:  Use multiple DEC->BINs (one for each decimal digit) to encode decimal input as BCD.

    Step 2:  Convert the BCD into 8-bit binary (parallel SPRK).

    Step 3:  Use PISO to convert binary (parallel SPRK) into binary (serial FILT).

    Step 4:  Repeat steps 1-3 to set second input (I have a way of setting the inputs separately).

    Step 5:  Do math with data from both inputs using adder/subtractor or multiplier/divider.

    Step 6:  Use SIPO to convert the resulting output into binary (parallel SPRK).

    Step 7:  Convert 8-bit binary (parallel SPRK) into BCD.

    Step 8:  Use multiple BIN->DECs (one for each decimal digit) to decode the BCD as the decimal output, which is then shown on an LCRY display or some other means of display I may find while browsing tpt saves.

     

    The decimal and BCD data are always parallel SPRK.  With this method, I don't have to assign an individual input or output for every decimal number that fits in 1 byte of data.  The only two things I need are a BCD->BIN and a BIN->BCD.  Both the DEC->BIN and the BIN->DEC take parallel SPRK input and give parallel SPRK output; likewise, the BCD->BIN and BIN->BCD I need must do the same.  I found some large non-subframe machines that convert binary to BCD, and I'm working on a smaller design and a way to reverse the process, but it'll be hard because I can't find any that work similarly to the radix converters I'm using.
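
    For the BCD->BIN direction at least, the underlying math is just repeated multiply-by-ten-and-add, which only needs shifts and adders.  A rough Python sketch of the idea (illustrative only, not a circuit design):

    # Illustrative only: packed BCD (one 4-bit nibble per decimal digit) -> plain binary.
    def bcd_to_bin(bcd):
        digits = []
        while bcd:
            digits.append(bcd & 0xF)      # peel off nibbles, ones digit first
            bcd >>= 4
        value = 0
        for digit in reversed(digits):    # most significant digit first
            value = value * 10 + digit    # *10 is just (value << 3) + (value << 1)
        return value

    print(bcd_to_bin(0b001001010101))     # BCD 2,5,5 -> 255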

    Edited once by MachineMan. Last: 4th April
  • Synergy
    5th April Member 0 Permalink

    If you look at my 29-bit computer, you'll find an example of BCD > binary and binary > BCD, neither of which is implemented in subframe. I just used standard FILT/ARAY technology with the double dabble algorithm.

     

    https://powdertoy.co.uk/Browse/View.html?ID=1761441

     

    Ark made a similar circuit back in the day as well:

     

    https://powdertoy.co.uk/Browse/View.html?ID=1214884 (the long blue/green circuit at the top)

     

    I have even older implementations of this from back in 2011, pre-FILT/ARAY technology, which might be slightly easier to reverse engineer: https://powdertoy.co.uk/Browse/View.html?ID=309235 . I believe this was actually the first working implementation of double dabble in TPT; unfortunately, TPT updates seem to have broken it over the years.

    Edited 7 times by Synergy. Last: 5th April
  • MachineMan
    12th April Member 0 Permalink

    Perfect, https://powdertoy.co.uk/Browse/View.html?ID=1761441 is what I need.  But how do the BCD->BIN and BIN->BCD work?

  • Synergy
    13th April Member 0 Permalink

    @MachineMan

     

    https://en.wikipedia.org/wiki/Double_dabble

     

    I implement it in TPT using a combination of BRAY/DRAY/ARAY/FILT tech. Each of those little modules represents a decimal digit between 0 and 9. You feed the binary number into the BIN > BCD converter and it iterates through the double dabble algorithm. The algorithm itself is simple: you shift the bits of the binary number in one at a time, and before each shift you check each module. If its value is 5 or more, add 3 and then shift; if it's less than 5, just shift. Repeat this once per bit of the binary number. By the end of the process, every module contains a number from 0 to 9, which is the decimal conversion.

     

    If you look closely at the FILT in every module, it is arranged like so:

     

    0001 (1)

    0010 (2)

    0011 (3)

    0100 (4)

    1000 (8 which is 5+3)

    1001 (9 which is 6+3)

    1010 (10 which is 7+3)

    1011 (11 which is 8+3)

    1100 (12 which is 9+3)

     

    I would suggest reading the Wikipedia article and understanding the algorithm, as it's actually very simple. Practice applying it with pen and paper until you understand it, and then you should be able to reverse engineer mine relatively easily. All the modules really do is take a group of four bits, check whether the value is 5 or greater (and if so add 3), and then shift the bits left. This is repeated once for each bit of the input binary number. The number of modules corresponds to the maximum number of decimal digits in the result. In my 29-bit computer the max value is 536,870,911, so I have 9 modules for the 9 digits. On every shift, the leftmost bit of each module is fed into the rightmost bit of the module to its left.

     

    Here's a pen and paper example of converting an 8-bit binary number into BCD (decimal). Because the max value of an 8-bit number is 255, we need three 4-bit modules, one for each decimal digit. The modules are on the left and the input binary number is on the right. I'll use 255 (11111111) as the input:

     

    0000 0000 0000 11111111 

    0000 0000 0001 1111111 (shift left)

    0000 0000 0011 111111 (shift left)

    0000 0000 0111 11111 (shift left)

    0000 0000 1010 11111 (add 3 to right module)

    0000 0001 0101 1111 (shift left)

    0000 0001 1000 1111 (add 3 to right module)

    0000 0011 0001 111 (shift left)

    0000 0110 0011 11 (shift left)

    0000 1001 0011 11 (add 3 to middle module)

    0001 0010 0111 1 (shift left)

    0001 0010 1010 1 (add 3 to right module)

    0010 0101 0101 (shift left)

    ------------------

       2    5    5 = 255  (each 4-bit module read as a decimal digit)
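
    For anyone who wants to check that trace against code, here is the same check-add-3-then-shift loop as a small Python sketch (just an illustration, not how the FILT/ARAY modules are actually wired):

    # Illustrative double dabble: 8-bit binary in, three packed BCD digits out.
    def bin_to_bcd(n, bits=8, digits=3):
        scratch = 0                                    # holds the BCD modules
        for i in range(bits - 1, -1, -1):              # walk the input bits, MSB first
            for d in range(digits):                    # correct each 4-bit module
                if ((scratch >> (4 * d)) & 0xF) >= 5:
                    scratch += 3 << (4 * d)            # add 3 where the digit is 5 or more
            scratch = (scratch << 1) | ((n >> i) & 1)  # then shift the next bit in
        return scratch

    print(format(bin_to_bcd(255), '012b'))             # 001001010101 -> digits 2, 5, 5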

     

     

    Edited 5 times by Synergy. Last: 13th April
  • MachineMan
    17th April Member 0 Permalink

    I just need to know where the inputs, outputs, and carriers are.  If I had one digit from the BCD->BIN and one from the BIN->BCD, I should easily be able to figure out how they work.