20. How can we use logical operations?#

20.1. Why do we need to think about bitwise operations?#

Understanding them is a prerequisite to what we will see today, and it will help you understand hardware overall.

You of course will not need every single thing we teach you in every single class.

  • Seeing a topic at least once is the only way you can make an informed decision about whether or not to study it more deeply.

  • Seeing a topic in more detail than you will use all the time helps you build intuition, or deep understanding, of the topic overall, and helps you remember what you need to remember.

20.2. Bitwise operators review#

  • & : and

  • | : or

  • ^ : xor

  • ~ : not

  • >> : shift right

  • << : shift left
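
As a quick illustration (a sketch of mine, not from the original notes), here is how these operators look in Python on small integers:

a, b = 0b1100, 0b1010  # 12 and 10

print(bin(a & b))   # 0b1000   and: 1 only where both bits are 1
print(bin(a | b))   # 0b1110   or: 1 where either bit is 1
print(bin(a ^ b))   # 0b110    xor: 1 where the bits differ
print(bin(~a))      # -0b1101  not: flips every bit (two's complement)
print(bin(a >> 2))  # 0b11     shift right by 2 places
print(bin(a << 2))  # 0b110000 shift left by 2 places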

Let’s review truth tables for and, or, and xor.

Table 20.1 AND#

a   b   AND (a&b)
0   0   0
0   1   0
1   0   0
1   1   1

Table 20.2 OR#

a   b   OR (a|b)
0   0   0
0   1   1
1   0   1
1   1   1

Table 20.3 XOR#

a   b   XOR (a^b)
0   0   0
0   1   1
1   0   1
1   1   0
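
As a check (my sketch, not part of the original notes), a short Python loop reproduces all three tables at once:

for a in (0, 1):
    for b in (0, 1):
        # columns: a, b, a&b, a|b, a^b
        print(a, b, a & b, a | b, a ^ b)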

To implement more complex calculations using gates, we can use these tables as building blocks, comparing their outputs to the output we need.

There are more gate operations; you can see a simulation for 16 gates

20.3. Adding as an Algorithm#

Let’s review adding binary numbers.

Remember, binary is a place-based system, like the decimal place-based system you are likely familiar with.

How do you add each of the following:

  • 2 + 3

  • 14 + 17

  • 65 + 37

To add numbers with two or more digits, we add the rightmost place (the ones), and if the sum is 10 or more, we carry the 1 and then add three numbers in the next place (the carried 1 and the other two digits). We proceed this way, moving from right to left.

Since binary is place-based, adding in binary follows the same basic algorithm:

  • add the two bits in the ones place

  • carry to the next place if >=2

\[ 101 + 100 = 1001 \]

We first add the 1’s place and get a 1, then the 2’s place and get a zero, then the 4’s place, where 1 + 1 gives 0 with a 1 carried into the next place.

\[ 010 + 011 = 101 \]

In this case, in the 1’s place we add 0 + 1 to get 1; in the 2’s place the two ones add to 0 with a carry; then in the 4’s place the carried 1 plus 0 plus 0 gives another 1.
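
As a quick check of these two examples (my addition, not from the notes), Python’s binary literals and bin() confirm the sums:

print(bin(0b101 + 0b100))  # 0b1001, since 5 + 4 = 9
print(bin(0b010 + 0b011))  # 0b101, since 2 + 3 = 5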

20.4. Adding as logic gates#

Let’s make a truth table for adding two bits.

Table 20.4 Add#

a   b   out 2’s   out 1’s
0   0   0         0
0   1   0         1
1   0   0         1
1   1   1         0

Now, comparing to the truth tables above, what gate can we use to get the output 1’s place bit, and what gate can we use to get the output 2’s place bit?

It turns out the one’s place is an xor gate, and the two’s place is an and gate.

This makes up the half adder; try one out at this simulator.
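
A minimal Python sketch of a half adder (my example, not from the notes), written directly from Table 20.4 using the operators above:

def half_adder(a, b):
    # returns (carry, sum): the 2's place is a & b, the 1's place is a ^ b
    return a & b, a ^ b

# all four input combinations, matching Table 20.4
for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))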

20.5. Implementing the carry#

So this lets us add two bits, but what about adding numbers with more bits?

We can put multiple together, but there’s one more wrinkle: the carry.

That’s what makes a full adder different: it adds three single bits (a carry and two bits) and outputs the result as a sum bit and a carry bit.
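
One common construction (a sketch, assuming the half_adder function from the sketch above) builds a full adder from two half adders plus an or gate for the carry:

def full_adder(a, b, carry_in):
    # add a and b first, then add the incoming carry to that partial sum
    carry1, s1 = half_adder(a, b)
    carry2, s = half_adder(s1, carry_in)
    # at most one of the two half adders can produce a carry, so or them together
    return carry1 | carry2, s

print(full_adder(1, 1, 1))  # (1, 1): 1 + 1 + 1 = 3 = 0b11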

20.6. Adding more bits#

Then we can link many of those together to get an 8 bit ripple adder.

Alternatively, we can “lookahead” with the carry bit, passing it forward multiple places all at once, as shown in this 4 bit carry lookahead adder.
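
As a sketch (again assuming the full_adder function above), a ripple adder simply chains full adders, feeding each carry out into the next place’s carry in:

def ripple_add(a_bits, b_bits):
    # bit lists are least-significant-bit first; the sum is returned the same way
    carry = 0
    out = []
    for a, b in zip(a_bits, b_bits):
        carry, s = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)  # the final carry becomes the highest place
    return out

# 101 + 100 = 1001 (5 + 4 = 9), written least significant bit first
print(ripple_add([1, 0, 1], [0, 0, 1]))  # [1, 0, 0, 1]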

20.7. Why do this?#

Working through this example reinforces the facts of how binary works, so that you can understand how a computer works. Working with these abstractions to break a higher-level operation down into components like this (addition is a more complex operation than and, or, or xor) also helps you see how that kind of decomposition is done.

Sometimes you have low-level problems, like resource constraints, where bitwise operations can be useful.

We may not do this all the time, but when we need it, we need it.

For example, consider how to swap two values.

Assume we have two variables a and b initialized like:

a = 4
b = 3
a, b
(4, 3)

We could swap them like

tmp = a
a = b
b = tmp

then we can see the values to see that they are swapped

a, b
(3, 4)

Let’s reset them

a = 4
b = 3
a, b
(4, 3)

With bitwise operations we can swap them without a 3rd variable.

a = a ^ b
b = a ^ b
a = a ^ b

then we can see the values to see that they are swapped

a, b
(3, 4)

If we implement this with only 3 bits, we have the 1’s, 2’s, and 4’s places.

4 and 3 represented in binary in 2 registers labeled a and b

Then we xor each bit and store the result in the first register (our a variable).

4^3 and 3 represented in binary in 2 registers labeled a and b

Next, we xor again and store the result in b.

4^3 and 4 represented in binary in 2 registers labeled a and b

Now the 4 has moved from a to b. If we xor one more time and store that result in a,

3 and 4 represented in binary in 2 registers labeled a and b
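
To make each of these steps visible, here is a small sketch of mine (the show helper is just for printing, not part of the notes) that displays both registers as 3 bits during the xor swap:

a, b = 4, 3

def show(step, a, b):
    # format each value as a 3-bit binary string so the registers' bits are visible
    print(f"{step:10} a={a:03b} b={b:03b}")

show("start", a, b)
a = a ^ b
show("a = a ^ b", a, b)
b = a ^ b
show("b = a ^ b", a, b)
a = a ^ b
show("a = a ^ b", a, b)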

20.8. Prepare for Next Class#

  1. In fractionalbinary.md use 8 bits to represent the following numbers, using 4 bits as usual (8, 4, 2, 1) and the other 4 bits as 1/2, 1/4, 1/8, 1/16:

  • 3.75

  • 7.5

  • 11.625

  • 5.1875

  2. Add to your file some notes about the limitations of representing non-integer values this way. How much would using more bits help, and what limitations are not resolved by adding more bits? For example, how could you represent 0.1?

20.9. Review today’s class#

  1. While we saw many types of gates today, we actually could get all of the operations needed using only NAND gates. Work out how to use NAND gates to implement AND and OR in nandandor.md.

  2. Add bitwise.md to your kwl and write the bitwise operations required for the following transformations (replace the (_) with a bitwise operator (&, |, ^, >>, <<, ~)):

    4 (_) 128
    12493 (_) -12494
    127 (_) 15
    7 (_) 56
    4 (_) -5
    45 (_) 37 = 37
    45 (_) 37 = 45
    3 (_) 5 = 7
    6 (_) 8 = 0
    10 (_) 5 = 15
    

20.10. More Practice#

  1. While we saw many types of gates today, we actually could get all of the operations needed using only NAND gates. Work out how to use NAND gates to implement a half adder and describe it in nandhalf.md.

  2. In addertypes.md compare ripple adders and lookahead adders.

    1. Give a synopsis of each adder type
    2. Compare them in terms of time (assume that each gate requires one unit of time)
    3. Compare them in terms of space/cost by counting the total number of gates required.
    
  3. Add bitwise.md to your kwl and write the bitwise operations required for the following transformations (replace the (_) with a bitwise operator (&, |, ^, >>, <<, ~)):

    4 (_) 128
    12493 (_) -12494
    127 (_) 15
    7 (_) 56
    4 (_) -5
    45 (_) 37 = 37
    45 (_) 37 = 45
    3 (_) 5 = 7
    6 (_) 8 = 0
    10 (_) 5 = 15
    

20.11. Experience Report Evidence#

Nothing separate

20.12. Questions After Today’s Class#

20.12.1. How often will we be using the emulator moving forward?#

We will use the logic emulator a few more times.

20.12.2. What other bitwise operations are there that do more efficient things like swap the values of two variables?#

This is a good explore badge idea.

20.12.3. How much of a disadvantage (performance-wise) is using the gates that are changed to fit economies of scale vs the simplified versions?#

Calculating this out is a good way to extend the practice badge today into an explore badge.

Consider the time to pass through each gate as constant, but compare how the number of total gates changes.

20.12.4. Could you do any mathematical operation with bitwise operations, like square root, cube root, division, etc.?#

We can break anything down into bitwise operations. Some are more complicated than others, but all the computer has is bitwise operations, so everything can be done with those.

20.12.5. How would we implement this in code?#

This is how the computer implements your code as electrical activity.

If the question is only about the XOR swap algorithm, I put real python code in the notes.

20.12.6. Are the circuits and gates the things inside a processor?#

Yes, the processor contains circuits made up of gates.

20.12.7. Is it common to use tricks like the one above in embedded systems?#

Yes

20.12.8. Why is this so confusing and how can I understand how to do these operations better?#

More practice and maybe office hours?

It is confusing because it is new.

20.13. Will this be extremely useful if we start working with computers at their very foundation? (Physics, Discrete Math, etc.)#

Yes! Those are the types of cases where bitwise operations are the level you reason at.