Tuesday, 25 December 2018

A Church for the Lost

I guess it was somewhat of a good thing that it took me a little while to actually get around to writing this post. At first, I was inspired to do so because when I was in Brisbane I decided that I wanted to go and visit the church that I usually visit when I am there. The thing is that they have this service at 6:30 pm on Sundays which is referred to as an 'informal service geared towards those interested in finding out more'. As it turned out, I had been to this particular service years ago, when I was in Brisbane for work. You see, the difference between this service and the one at 5:30 is that the one at 5:30 tends to be more geared towards university students, and then there is the morning service, something that somebody like me doesn't really like, namely because if I don't have to get up early, then I basically don't want to get up early.

Yeah, it's pretty clear that I don't have, and have never had, kids. I still remember the pain and frustration that would course through my parents when, at 7:30 am on a Saturday, we had bounced out of bed and were running around like nobody's business, and all they ever wanted to do was to sleep in. Then, for some really strange reason, when we became teenagers, and sleeping in became a thing (because staying up late watching Ronald Reagan movies was a thing), they still would get up relatively early. I guess that sort of happens when you spend 15 years of your life being forced to get up early.

So, yeah, I like to sleep in, so really, really early services don't appeal to me, and neither do family services because, well, I don't have a family. Actually, as one of my friends, who has recently discovered the joys of having children, pointed out, the problem with family services is that you end up spending so much time running around after the kids that you never actually get the opportunity to socialise. Maybe that's just because kids are so much more hyperactive these days, because from what I can remember, my parents had no problems socialising during the family services.

Anyway, this service seemed to be just right, except it wasn't what I expected. Sure, one of my former pastors had this saying, 'I'm not okay, you're not okay, we're not okay, but that's okay', and claimed that our church was a church for 'broken people' who needed healing. Except the fact of the matter was that it sort of wasn't. Look, don't get me wrong, I love the church that I am currently going to, and have made some pretty good friends here, but if there are broken people in the church, then they do a pretty good job of hiding it. In fact, the guy that says 'I was a porn addict and my wife left me, and God healed me and now I am happily married again' really doesn't know what true brokenness actually is.

Look, that is probably being a bit harsh, because honestly, I wouldn't wish divorce even on some of my worst enemies. Yet a part of me feels that maybe, just maybe, this particular person really doesn't get it. You see, until you have seen broken people, you really don't truly understand what brokenness is. Honestly, this isn't a world, and Christianity isn't a religion, that says 'worship God and everything will be fine and dandy'. Look, seriously, this world is not all sunshine and lollipops, and Christianity certainly doesn't make all your problems go away - just ask the Piedmontese, who were all thrown off the top of a mountain because they rejected Catholicism and decided to become Protestants, or the Indians who had their houses burnt down while the police refused to do anything about it.

So, I am sitting there, a little stunned, but quite humbled. In a way it was humbling seeing these truly broken people: one of them bawling his eyes out because no matter how hard he tries he just cannot give up smoking, and all that seems to be happening is that cigarettes are getting more and more expensive; people who cannot live without carers; and a church service that is geared towards people who probably don't, and never will, understand some of the deep theological concepts that are expounded during many of our more middle-class services.

Honestly, these people simply do not care about the intricacies of the Holy Spirit, or how it is possible that God can be three persons in one. They don't care about the debate between predestination and free will, and wouldn't really care about it even if we did try to explain it to them. In fact, these people are the ones that probably never have to worry about greed, because they have nothing to begin with.

Yet, what it makes me realise is that, in some cases, I feel that we may be missing the point somewhere in our sheltered middle-class churches. Such as when we spend more time studying the Bible in groups than actually going out and helping in the community. More so when we are being exceptionally critical of certain behaviours, without realising that some of these behaviours are really not worth getting all hot and bothered about, and by getting upset about them, we are more likely to push people away rather than draw them in. In fact, I still remember when one of my small group leaders jumped down my throat when I dozed off during small group, telling me that if I wanted to sleep, then I could go and do it at home. Well, that was easy then, I just stood up and headed for the door - you can be assured he never used that line of rebuke from then on.

Yet the church is an incredibly insidious institution. Okay, these days, many are spending their time hiding from the world, holding out crucifixes to ward off the incessant evils that they believe are pounding at their doors. All the while they are infiltrating the political sphere, taking over political parties, and using their economic policies not only to drive people further into poverty, but to destroy the world in the process. In fact, some Christians have found huge problems with the fact that on one hand their political allies are denying women abortions, and on the other completely destroying the Earth - maybe, just maybe, there is this belief that in doing so they will hasten the coming of Christ and send all of the evildoers to hell.

However, something still bugs me, and it is not just the fact that churches end up becoming your entire social group. There is something about the zealous that really disturbs me, and I am not talking about the fundamentalists, or the cultists either. No, I am talking about some of your average, middle-of-the-road churches. Yet for some reason they seem to be getting pushed further and further to the right. It just feels as if there is this fear, this fear that the church will not only be taken over by the world, but be overrun by the young hipsters with their pagan beliefs. In a way they treat themselves like havens that send out missionaries into the world to try and bring as many people back as possible, and once they have brought them in, to do as much as possible to keep them there.

This is why I would walk out of Bible Study, because I basically got sick of the shit that would be spouted from people's mouths, sick of the hypocrisy, the backbiting, the sniping, and the sneers and jeers. They would run back out and draw me back, and then promptly rebuke me for being so childish. Then, when I did finally say 'fuck this for a joke', they would continue to hunt me down, tell me that I was the one that had the problem, and that I was the one that needed to forgive, because, well, we as Christians are supposed to forgive one another. Should I call it gaslighting? I'm not sure, but a part of me certainly feels that this is what was going on - or is it blaming the victim? Or is it both?

The first thing that I should point out is that Christians aren't the only people who, when they get together, all start acting like dicks, start playing power games, and start pushing people around. As Aristotle pointed out two and a half thousand years ago, man is a political animal, which means that when people get together, they start playing power games, and also start kicking the weakest around. In fact, when somebody decides to call out their bullshit, they will get even more violent. However, we live in a society with laws, and you simply cannot cut somebody down because their bullshit has been called out, so they resort to much subtler methods, such as attacking irrelevant points, or even attacking character. Another interesting thing I've noticed is that some of the biggest jerks are actually half-decent people when there is nobody else around.

So, when people get together, people start acting like jerks; it doesn't matter whether you are in a political party or in a gaming club. In fact, I left both types of organisations namely because there were a bunch of people there that were basically behaving like jerks, and playing power games, as well as being completely closed-minded to any thoughts other than their own. I still remember one particular person at a gaming club who, when he was serving behind the bar, went over my driver's licence with a fine-tooth comb, simply because he could, and also wanted to remind everybody that this was his domain, and he had the authority to throw his weight around - no wonder the organisation collapsed under its own contradictions.

Yet if all organisations are like this, why don't we simply tell the church to basically get stuffed and walk away? Well, that is basically what is happening, and is what has been happening for the last hundred or so years. When my parents were children there was huge social pressure to make an appearance at church, though more in my Mum's case than my Dad's. Mum would tell me that every Monday morning, if you weren't at church you were called out, and shamed. However, people are walking away from the church, in the same way that they are walking away from political parties, from unions, and even from gaming clubs - they have realised that they are sick of the bullshit and the power games, and are simply happy to live their own lives with decent people - except there are none. Oh, and who is to blame? Well, the people walking away from the church, apparently, as opposed to the church itself.

So, why didn't I walk away from the church? Well, I did, and when I did I found that my life was spiraling even further and further into turmoil. So, whose fault was that? Well, a part of me wants to blame the church, but is it the church's fault? In part it is, in just the same way that a serious injury caused by a drunk driver is the fault of the drunk driver. However, whose responsibility is it? Well, it is mine - do I continue to let the rubbish get to me, or do I move on? Well, I ended up moving on, though it was certainly a pretty hard task to do so.

Why, then, this particular focus on the church? Well, because the church is supposed to be so much better than this. In fact, they try to make it appear that they are so much better than this. Yet the reality is that they are little different from that gaming club that collapsed in on itself. However, the thing with the church is that they offer salvation, but who do they offer salvation to? The middle class, of course. Which brings me back to the broken people that I mentioned beforehand. You see, it is so much easier, and more profitable, to reach out to university students and businessmen - in fact the churches that I have been to seem to be full of middle-class people - they have the money, therefore the church continues to grow, send out missionaries, and transform the world.

Or should I say destroy it with their neoliberal, climate-change-denying, economic policies.

But let us get back to this small gathering that was meeting in the basement of the church in South Brisbane. As I sat there I realised that this is what a church is supposed to be - a room full of broken people who fully understood their brokenness. A gathering that is not only welcoming and supportive, but a gathering that is patient with people that may not fully understand what it means to be a part of the middle class, church going culture. It was a church without the flashy lights, stage-managed performances, or even semi-professional musicians - it was just one guy on a piano.

Yet the problem is that many of us who live in a middle-class culture, people who do not struggle with alcoholism, drug addiction, or even cigarettes, cannot truly understand. It is hard for those of us who have never been in trouble with the law, who do not have a criminal conviction hanging over our heads, to truly understand what it means to be broken. In many cases we shy away from these people, not because we hate or despise them, but rather because we do not know how to approach them, how to speak to them, or even how to respond to them. In fact, it can be pretty hard for university-trained individuals to even be able to speak to members of the working class, who have either dropped out of high school or simply went on to do a trade - how does one who reads Shakespeare in their spare time communicate with somebody who has never read a book since high school?

It is difficult, and it is a challenge, yet it is a challenge that we need to confront, not with the church, but within ourselves. In a way we place ourselves in echo chambers, and in fact disconnect ourselves from the rest of the world - a world that not only we don't understand, but a world that we fear. I'm not suggesting that we change things quickly, but maybe, just maybe, if instead of hanging around with our university friends, instead of using church as an opportunity to network, and to build business partners, instead of using church as a means to find a partner, and become a part of the middle class lifestyle, we instead go downstairs, sit among the broken people, and really come to understand who it is that Christ came to save.

Creative Commons License

A Church for the Lost by David Alfred Sarkies is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. This license only applies to the text and any image that is within the public domain. Any images or videos that are the subject of copyright are not covered by this license. Use of these images is for illustrative purposes only and is not intended to assert ownership. If you wish to use this work commercially please feel free to contact me.

Tuesday, 18 December 2018

Mobile Phones

The one main factor when it comes to mobile phones is real estate - not the property kind, but screen real estate. Okay, I'm probably stating the bleeding obvious here, but mobile phone screens are a lot smaller than computer screens, so in this scenario basically every pixel counts. Have you ever tried looking at a desktop webpage on a mobile device? It's practically unreadable, isn't it? As such, our apps need to be configured so they can relate to mobile devices, particularly since a lot of people only access the internet through a mobile device (which is probably not the most secure way of doing things, but that is another discussion for another time).

There are other considerations to take into account, such as screen widths. This isn't as much of a factor with iOS devices, but it certainly is when it comes to Android devices. Basically, because Android is an operating system that runs on devices from many different manufacturers, screen sizes vary wildly from device to device, so we also need to take this into account.

Inputs are another factor. First of all, mobile phones are touch-screen devices, so the old mouse pointer is no longer a factor. Also, fingers do tend to be of different sizes, so we need to take that into account. Actually, when it comes to mobile phones there are a lot more input factors that we can take advantage of, such as the cameras, the speakers, and even the GPS system. All of these can be used by the app. Also, text input can be an absolute pain, so many apps take advantage of auto-complete.

Thumb-reach diagrams give us an idea of where the user can comfortably reach on the phone, and we need to take this into account when developing the app. Also, swipe motions are important, particularly since people tend to swipe down, and not up.

Obviously mobile phones are portable, which means people are going to be using them while they are out and about. This leads to another thing - interruptions. They might get a call, or might want to take a photo, or a video. As such, people will want to change to something else, and then be able to return when they are finished.

This leads on to another point, and that is the micro-moment. Have you ever suddenly wanted to know something, and opened up Wikipedia to actually find out? Well, this is something that we need to consider - when developing an app, one important thing to take into account is these micro-moments - people might have a sudden inspiration, and we need to capture that inspiration with our app.

So, when approaching app design for a mobile phone, we need to strip our app down to the base components, linearise the content, optimise the most common features, and also take advantage of what the hardware has to offer.

Now for the patterns

One Window Drilldown: This pattern, which is also used on normal web based sites, is probably best known for when you have a series of photos you wish to select. Basically you are presented with a number of options (usually as a grid), and when you press one of the options, the device focuses, or 'drills down', onto that selection.

Hub and Spoke: Okay, the iPhone and the Android have a lot of things that are different, however the hub-and-spoke pattern is reasonably similar. Basically you have a main screen where all of the apps are placed, and you can pretty much access all of the apps from this particular screen. Of course, when you have so many apps that you can't actually fit them on one screen (or use widgets in the case of Android), then you probably need more than one screen to fit them all on, however the principle does generally stay the same.

The above patterns aren't exclusive to the mobile phone, but due to the architecture, they have become quite prevalent. However, the following patterns have developed to work with the limited space that the phone offers.

Vertical Stack: You basically see this one everywhere. Due to the limited width of the phone (and while you can turn it sideways, honestly, who actually does this unless you want to watch a video - I know I don't) this is one of the best navigation tools available. Now, there is no hard and fast rule as to how it applies, and it can be combined (and usually is) with other patterns. What it means, though, is that pretty much everything is presented vertically, whether it be a news article or a list of options from which to select. You may even see this with the accordion or collapsible panels. In fact, it works quite well with those patterns.

Film Strip: Another one that you probably see a lot, and it is usually used where there are similar pages to be presented side by side. An obvious example is the main screen on your mobile phone, where you slide either left or right to access apps that may not be on the screen you are looking at. Another feature tends to be dots at the bottom of the screen to inform the user of which position of the strip they are at, though this shouldn't be overused in the sense of putting endless amounts of pages on the strip. A small number usually suffices.

Touch Tools: These are tools that only appear when the screen is touched, or pressed, and will usually display an overlay in relation to the tool that is being used. However, the tool generally doesn't stay there for too long, and will fade after a few moments, most likely in case the user does not wish to take advantage of the tool.

Bottom Navigation: Have you noticed that a lot of apps have their most important navigation tools at the bottom of the screen? Okay, probably not, but I can assure you that you will now. There probably isn't a huge number of icons here, just the really important ones. The reason is that the bottom of the screen is close to where the phone is usually held, so it provides easy access to these particular options.

Sidebar Navigation: This can actually be better than the bottom navigation in that it gives the app more real estate for important navigation tools that can't be squeezed into the bottom navigation. However, you generally won't see it unless you request it, and this is usually done via a hamburger menu at the top of the screen (so called because the three lines look like a hamburger). This has the advantage of releasing more real estate temporarily for navigation options, and also provides more space for more detailed navigation options.

Lists: There are a couple of lists, one of them being the thumbnail and text list. These are obvious on app stores, but can also be found in other places, eBay and Amazon for example. Normally they appear after a search is executed, and a list of results then appears with a small picture and some text nearby. Another list is, well, just the list, where a number of options are provided vertically, and in a way each item of the list is actually a button.

Infinite Scroll: This is usually combined with other patterns, such as the vertical stack and the list. Basically what you are doing is scrolling down infinitely (though sometimes you reach the bottom). What will happen is some information is provided and as you are scrolling, the app is loading further information so that it appears that the list is infinite. This can also be implemented by having a button at the bottom with an option to load more content.

Pull Down Action: Have you ever pulled a screen down on a mobile phone to refresh it? Notice how this has literally become second nature to us these days, and we actually scratch our heads when it doesn't work. Yep, this is one of those patterns that is basically expected when it comes to mobile phones.

Generous Borders: Due to the nature of the phones, and the varying sizes of fingers, this is another one of those necessities. Basically we are looking at having a decent amount of white space around the borders to prevent people from accidentally clicking the wrong button. Mind you, sometimes some designers actually do the opposite to encourage accidental clicking.

Text Clear Button: Typing on a mobile phone is an absolute pain in the proverbial, and getting rid of that text is even worse. This is why the little 'x' at the side of the text box is generally included, like, everywhere. It's to make getting rid of unwanted text much easier (such as that message proclaiming your undying love to that particular person on Facebook - one time when you don't want to press the wrong button).

Take a look at this video, which explores concepts of app design:

One final thing to remember when it comes to mobile app development is basically the fact that there are really only two platforms - iPhone and Android. Yeah, there is Windows, but honestly, does anybody actually own a Windows phone (well, one friend I know did, but I don't know if she still does)? The thing is that these platforms have a pretty detailed list of design standards and requirements for apps to appear in the store.

Look, there was a time when Google was actually a lot less strict, but it turned out that this policy was a double-edged sword, since quite a few malicious apps were being added. While everybody was railing against Apple's tyrannical approach to allowing content on the phone, malicious hackers were having a field day when it came to Android, which in the end is why Google was forced to tighten up in that regard.

Well, maybe Facebook has now learnt a lesson or two about allowing third parties to have unfettered access to data simply because the user gave them permission to do something they didn't actually understand.

Creative Commons License

Mobile Phones by David Alfred Sarkies is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. This license only applies to the text and any image that is within the public domain. Any images or videos that are the subject of copyright are not covered by this license. Use of these images is for illustrative purposes only and is not intended to assert ownership. If you wish to use this work commercially please feel free to contact me.

Saturday, 8 December 2018

Memory - And the Computer Remembers

Well, it sounds as if they have decided to use the term bandwidth for memory as well. That is confusing, because bandwidth has a different meaning elsewhere, namely the range of frequencies that something is broadcast upon. However, this isn't the case with memory, because the frequency, or speed, of the memory has nothing to do with its bandwidth. Here, it is basically the width of the connector to the memory, which tells us how much input and output can occur at the same time. Namely, it is how much data can be transferred at a single time.

There is also latency, which basically tells us the time it takes for something to be found within the memory, but it is more than that because we are also talking about something getting placed in the memory, so in a way latency is basically the speed of the transactions. However we will get onto how to work these things out a little later. First, let us look at the memory hierarchy.

The Pecking Order

Once again, a diagram is going to help here:

Okay, the pecking order really comes down to the speeds. Now, we won't be worrying about things at the bottom of the pyramid just yet, we will only be focusing on things at the top.

Now, note the pyramidal structure - the reason that it is like that is because we are not just talking about speed, but also size and cost. So, the registers at the top, which is where information is stored in the CPU while it is being processed, are the smallest, the fastest, and the most expensive. Then we have the caches. As explained previously they sit in the CPU (though the level three cache, which isn't listed here, sits just outside), and are constructed using flip flops. Once again, they are fast, small, and pretty expensive.

Then we have the main memory, which is basically going to be the focus of this post, though we will be touching on the others as well as they also play a role. Now, these memory modules are the ones that you purchase to place into your computer, and they use capacitors to store information (though if you were to look at one you will think that they are just more integrated circuits). Capacitors are actually cheaper than flip-flops, but they need to be constantly refreshed. They are also set out in a grid arrangement, and the various points are located based on rows and columns.

Memory Types

So, now, let's look at the various types of memory. There are going to be a few more than what I have mentioned here, but they do give you an idea of how memory developed.

PROM: This is programmable read-only memory, and really isn't in use today. A plain ROM chip had its contents written when the chip was made, while a PROM could be programmed once afterwards by the user. Either way, the result was the same: once the memory had been programmed it pretty much stayed that way, well, forever. There used to be a time when ROM was a standard facet of pretty much every computer (I remember my old Commodore 64 had RAM - random access memory - and ROM; the ROM basically contained all of the computer's system information). However, these days it pretty much isn't used.

EPROM: Erasable programmable read-only memory was slightly different in that you could actually erase the contents of the memory and then reprogram it. However, it was pretty basic. You would recognise one because it had a little window in it. That was because, to erase the contents, you had to shine an ultraviolet light into the window. There were two problems though - firstly, you had to erase everything on the chip and start from scratch; there were no half measures here. The other problem was that you had to make sure that no UV light accidentally got into the chip.

Here is a picture of one:

EEPROM: This is the next evolution, and stands for electrically erasable programmable read-only memory. This was far superior, as you could erase parts of the memory and reprogram it, and you didn't have to keep it away from ultraviolet light. These chips are still around today.

Now, the difference between ROM and RAM is that RAM is known as volatile memory, while ROM isn't. Volatile memory means that if you turn the computer off basically everything is lost. This is why ROM was useful because it would retain the information even though your computer has shut down. The RAM is the memory that is made up of capacitors, and the reason that it is volatile is because unless there is a current flowing through the memory, the charges that are held in the capacitors will quickly drain away - think of it like a tank of water, where there is an inlet, and an outlet - as long as the water flows, there will be water in the tank, but as soon as the flow stops, the water will eventually all drain out.

More Tidbits

Let us jump back to latency and bandwidth for a second. You see, in your normal laptop or desktop, latency isn't really that important - it matters most in time-critical situations, particularly with server systems. If you were using your computer as a home video unit, then bandwidth would be the important figure, since you are moving large chunks of sequential data. The thing with memory is that it really only moves one piece of data at a time - it can't mix and match. As such, when we are moving large amounts of non-sequential data, latency suddenly becomes more important than bandwidth.

Okay, let us move onto random access memory. There are two types of RAM - SRAM and DRAM. SRAM stands for static RAM and DRAM stands for Dynamic RAM. Let us look at each individually:

SRAM: This is generally what the cache is made of, and it is constructed using flip-flops. Basically it can hold information for as long as power is supplied, though it is still volatile, which means if you turn off the computer everything is lost. It has a low density, and tends to be a lot more expensive and a lot more power hungry.

DRAM: The dynamic RAM is what your memory modules are made up of and are usually comprised of capacitors. The problem with capacitors is that they can only hold a charge for a very short period of time, so they need to be constantly refreshed. However, the density tends to be much greater, and it also tends to be a lot cheaper to produce.


I've probably talked about cache quite a bit, but there are still things that I haven't touched on. The thing is that cache is small, which means management of the data that it holds is paramount. Now, when a CPU is searching for something, it begins by searching the caches, from level one through to level three. If it finds what it is looking for, then it is a cache hit; otherwise it is a cache miss, and the CPU then goes and looks for the data in the main memory.

So, the trick comes down to knowing what to keep in the cache and what to discard. There are a few methods, including least recently used, namely where the stuff that has been there the longest without being used is tossed; most recently used, which is basically the opposite; least frequently used; or simply just a random selection. Honestly, there isn't an optimal answer because, as Murphy's Law implies, as soon as you toss something out is basically when you actually need it.
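To make the least-recently-used policy above a little more concrete, here is a toy sketch in Python. This is only an illustration of the eviction idea - real CPU caches implement this in hardware with sets and ways, not dictionaries - and the class and method names here are made up for the example.

```python
from collections import OrderedDict

class LRUCache:
    """Toy least-recently-used cache: a sketch of the eviction
    policy described above, not a model of real CPU hardware."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # key -> value, oldest entry first

    def lookup(self, address):
        if address in self.entries:
            # Cache hit: mark this entry as most recently used.
            self.entries.move_to_end(address)
            return self.entries[address]
        return None  # cache miss: the caller fetches from main memory

    def store(self, address, value):
        if address in self.entries:
            self.entries.move_to_end(address)
        elif len(self.entries) >= self.capacity:
            # Evict the least recently used entry (the oldest one).
            self.entries.popitem(last=False)
        self.entries[address] = value

cache = LRUCache(capacity=2)
cache.store(0x10, "a")
cache.store(0x20, "b")
cache.lookup(0x10)          # touch 0x10, so 0x20 is now the oldest
cache.store(0x30, "c")      # cache is full, so 0x20 gets evicted
print(cache.lookup(0x20))   # None - a cache miss
print(cache.lookup(0x10))   # "a"  - a cache hit
```

Notice the Murphy's Law problem in the example itself: 0x20 is evicted precisely because it hadn't been touched recently, and the very next lookup asks for it.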

Now another thing we should take into consideration is with the main memory and the CPU's front side bus (basically the front door). You see, the front door is only so wide, so if the road from the memory is wider than the CPU's front door we are suddenly going to get a bottleneck as all this stuff tries to squeeze through. On the flipside, if the front door is wider than the road, well, we are going to find the CPU sitting idle while we are waiting for the goodies to arrive.

Figuring Out the Numbers

Okay, have a look at this little piccy I got from Hardware Secrets.

Do you know what all those numbers mean? You do? Good, because I don't.

No, seriously, what we are looking at here is what is referred to as the RAM timings. They basically tell you how fast the RAM is, and also its bandwidth. Now, note that it is DDR3 - DDR stands for Double Data Rate, meaning the memory transfers data twice per clock cycle. Pretty much all RAM these days is double data rate, but it is important to remember this when working out the timings. As for the 3? Well, that basically means that it is third generation.

So, first comes the maximum theoretical transfer rate. Actually, it is already written on this label, where it says PC3-10666, however you can work it out from just the first bit, namely the 1333. Now, this is the effective data rate, but not the real clock speed. The reason for this is because it is double data rate, so you must halve it, which brings it down to 666. Now, there is a formula for working out the transfer rate, namely:

data rate x (number of bits / 8)

Now, the number of bits is 64, so dividing by 8 gives us 8, and the data rate is 1333 (we don't halve that number for this equation, since both edges of the clock carry data), so multiplying that by 8 gives us 10664 - close enough to the 10666 on the label, since the exact data rate is 1333.33.
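The calculation above can be sketched as a one-liner. The function name and default bus width are my own choices for illustration; the formula is the one from the text.

```python
def transfer_rate_mb_s(data_rate_mt_s, bus_width_bits=64):
    """Theoretical transfer rate in MB/s from the DDR data rate (MT/s)."""
    return data_rate_mt_s * (bus_width_bits / 8)

# DDR3-1333: 1333 MT/s over a 64-bit bus
print(transfer_rate_mb_s(1333))  # 10664.0 MB/s, which the label rounds to PC3-10666
```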

Let's now turn to the latency. See the numbers that say CL7-7-7-18 - well, that is the latency. The first number is the 'Column Address Strobe' latency, or CAS latency. It is also referred to as the 'access time'. This tells us how many clock cycles we have to wait, once the column address has been sent to the controller, to receive what we have requested. The next number is the RAS (Row Address Strobe) to CAS delay, which is the number of clock cycles we have to wait once a row has been selected before we can send a column address to the RAM controller.

The next number is the Row Precharge Time, which is the number of clock cycles we have to wait if we already have a row selected but we want to change to another row. The final number is the Row Active Time, which is the minimum number of clock cycles that a row needs to stay active before it can be closed again. Now, the thing is that these days only the first number will actually be listed on the memory's specifications.

Okay, now, let's try some math. The time we have to wait to receive information from the RAM if no row has been selected:

  • Activate the row: 18 clock cycles;
  • Send the column address: 7 clock cycles;
  • Wait for a response: 7 clock cycles.
So the total is 32 clock cycles.

What if we have the wrong row selected?
  • Change the row: 7 clock cycles;
  • Send the column address: 7 clock cycles;
  • Wait for a response: 7 clock cycles.
So the total is 21 clock cycles.
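The two cases above can be wrapped in a small helper. This follows the post's simplified accounting (activation charged at the Row Active Time, a row change at the Row Precharge Time); the function and parameter names are mine, not standard JEDEC terms.

```python
def access_cycles(cl, trcd, trp, tras, row_open=False, wrong_row=False):
    """Total clock cycles to read, using the simplified model from the text."""
    cycles = 0
    if not row_open:
        cycles += tras   # activate a fresh row
    elif wrong_row:
        cycles += trp    # precharge and switch to the right row
    cycles += trcd       # send the column address
    cycles += cl         # wait for the response
    return cycles

# CL7-7-7-18 from the label:
print(access_cycles(7, 7, 7, 18))                                 # 32 cycles, no row open
print(access_cycles(7, 7, 7, 18, row_open=True, wrong_row=True))  # 21 cycles, wrong row
```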

At the basic level, latency is the delay between when an instruction is entered and when it is executed, though here it is measured in clock cycles. Now, to work out how much time it actually takes we need to go back to what we did when we worked out the CPU clock speed. It goes by the same process.

So, let's work that out for the above memory. We know that the memory is DDR3-1333, so the real clock speed is 666 MHz (which is 666 million cycles per second). Now, to get the actual time we need to invert it, and turn it into nanoseconds. Remember that this is in MHz, so we should convert it to Hz, so the speed in Hz is 666 000 000. Now invert it and you get:

0.000 000 001 5

This is in seconds, so we need to convert it into nanoseconds by multiplying by 10^9 (there are 10^9 nanoseconds in a second).

So our answer, which is the length of a clock cycle, becomes 1.5 ns. Now that we have the length of the clock cycle, we can work out the true latency - all we need to do is multiply it by the number of clock cycles.

Our first answer was 32 clock cycles, multiplying that by 1.5ns, gives us 48 ns. The second answer was 21 clock cycles, so the answer is 31.5 ns.

Let us try one more before we move on. This time the memory module is a DDR4-2133. So, first we halve the frequency (due to it being double data rate), so we have 1066.5 MHz. Then we convert it to Hz, which gives us 1 066 500 000.

Then we invert it (1 / 1 066 500 000), which gives us 0.00000000093764. Then we convert it into nanoseconds (since this is in seconds), which produces 0.94 ns. The CAS latency is 14, so multiplying the clock period by 14 gives us roughly 13.1 ns.
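Both worked examples above follow the same recipe, so here it is as a pair of helpers. The function names are mine for illustration; note the real clock is half the DDR data rate, which is where the halving step comes from.

```python
def clock_period_ns(data_rate_mt_s):
    """Clock period in ns: halve the DDR rate, convert MHz to Hz, invert, scale."""
    real_clock_hz = (data_rate_mt_s / 2) * 1_000_000
    return (1 / real_clock_hz) * 1e9

def true_latency_ns(data_rate_mt_s, cycles):
    """True latency: clock period multiplied by the number of wait cycles."""
    return clock_period_ns(data_rate_mt_s) * cycles

print(round(true_latency_ns(1333, 32), 1))  # ~48.0 ns for the 32-cycle DDR3-1333 case
print(round(true_latency_ns(2133, 14), 2))  # ~13.13 ns for DDR4-2133 at CL14
```

Using the unrounded clock period gives 13.13 ns for the DDR4 example, a whisker under the hand-rounded figure.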

Isn't it odd that the manufacturers don't actually advertise the true latencies of their products, and we have to work them out ourselves? Well, look at this table:

Interesting, isn't it? The true latencies actually haven't changed all that much, despite the fact that frequencies have got higher. That's probably why they don't advertise it - you are probably going to be drawn to the higher numbers (even though a higher CAS latency is actually worse, not better). The thing is that while the latency hasn't improved, the performance has. Also, the other reason that the latencies haven't come down is that the memory is actually a lot larger, and the larger the memory, the more time it takes to find what is needed.


Overclocking

Okay, this is the part that probably a lot of people are interested in. This is basically forcing the computer to run faster than the manufacturer intended it to run. The big question is whether it can actually damage your computer. Well, unfortunately the answer is going to be yes and no. Okay, that probably doesn't help you all that much, but the thing is that computers generally have failsafe mechanisms to prevent damage. If the computer ends up being overclocked too far, it is likely to shut down before any major damage occurs.

Components that can be overclocked are memory, CPUs, video cards, and motherboards. Let us take a CPU for example. The CPU speed is a multiplier times the front side bus speed, so a processor with a multiplier of 16 and an FSB (front side bus) speed of 200 MHz would run at 3.2 GHz. You can increase the speed by either increasing the FSB speed, or the multiplier. Increasing the FSB speed also speeds up the link between the memory and the CPU, while increasing the multiplier only increases the processor speed. However, we need to take voltage and heat into consideration.
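The arithmetic behind those two knobs is simple enough to sketch. The function name and the bumped values below are illustrative, not recommendations for any particular chip.

```python
def cpu_speed_ghz(fsb_mhz, multiplier):
    """CPU clock in GHz: front side bus speed (MHz) times the multiplier."""
    return fsb_mhz * multiplier / 1000

print(cpu_speed_ghz(200, 16))  # 3.2 GHz - the stock speed from the example
print(cpu_speed_ghz(210, 16))  # 3.36 GHz - a mild FSB bump
print(cpu_speed_ghz(200, 17))  # 3.4 GHz - raising the multiplier instead
```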

If you only want to increase it by a little, then maybe increase the FSB speed; however, if you want to go for broke, then increasing the multiplier is the trick. Remember though, even though it is unlikely you will damage the component, you still can. Honestly, I have never had a need to overclock my computer. The other thing is that gradually increasing the speeds is a way to reduce the chance of it burning out.

One of the reasons that people overclock their systems is for performance testing, and honestly, people have probably already done it, so you can check out the results of the various products here.

Creative Commons License

Memory - And the Computer Remembers by David Alfred Sarkies is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. This license only applies to the text and any image that is within the public domain. Any images or videos that are the subject of copyright are not covered by this license. Use of these images is for illustrative purposes only and is not intended to assert ownership. If you wish to use this work commercially please feel free to contact me.

Monday, 3 December 2018

I'll Be Back - The Story of John Conner

The Terminator franchise is by far one of my favourite movie franchises out there, though that has a lot to do with when I first saw Terminator II in the cinemas. I remember going there with a group of friends, and while all my friends were put off the movie because one of them happened to be incredibly loud, I was completely entranced with the film and the person sitting next to me. I also remember standing on the balcony at McDonalds afterwards having a chat about, well, absolutely nothing. In a way it was that time on the balcony, rather than the film itself, that has stayed with me all this time.

That isn't to say that the second film wasn't as good as I thought it was - I remember sitting in my flat watching it over and over again. I loved the car chases, and the gun fights, and when they blew up Cyberdyne Systems. I loved how Arnie stood in the window with the minigun, shot up all of the police cars, and when he scanned the scene afterwards his HUD displayed that there were 0.0 casualties. Of course, there is also the scene where he climbs onto the truck, empties an entire clip from an M16 into the T-1000, and grabs the wheel, causing the truck to jackknife. Sure, the special effects were nowhere near as good as they are today, but then again this was 1991.

Also, we simply cannot forget the scene from the first movie which has literally gone down in cinematic history:

Normally I would give a brief synopsis on these films, but I feel that with the Terminator that isn't all that necessary because it can simply be summed up in a single paragraph - in fact it can be summed up in a single sentence - a robot from the future goes back in time to kill a woman.

I remember reading a book on film making (I really wanted to be a director when I was younger, not for the prestige, but rather to tell some really cool stories) which said that when you are creating a movie you should be able to sum it up in a single sentence. Well, Terminator II (and the subsequent productions) simply comes down to two robots duking it out in modern day Los Angeles (well, Terminator Genisys is partially set in San Francisco).

I also remember sitting in my lounge with my friend watching Terminator II (as we were wont to do), and when the opening scene hit my television we would both comment on how it would be really cool to have a movie set during that period. They eventually did that with the fourth movie, and when I watched it again I suddenly realised how awesome that movie actually was. In fact the reason that I am writing this post is that I have just watched all five of the films and am now making my way through The Sarah Connor Chronicles (which, honestly, is pretty terrible).

One final thing before I go into my post is that these robots are tough - really tough, but then again I have a feeling that robots have always been tough. Actually, I probably should say Cyborgs - which is short hand for cybernetic organisms - namely because they are covered with living tissue - but then again, at least in my mind, the difference between a robot and a cyborg is that the cyborg's brain is organic.

The Backstory

While I have hinted that I don't really see a need to go into the backstory, since the Terminator franchise is pretty well known, a part of me feels that this post would be missing something if I didn't. So, the story goes that humanity develops a computer to control all of its military hardware - an AI, if you will. However, when they turn the computer on they suddenly discover a huge number of mistakes and attempt to turn it off again. Unfortunately the computer doesn't want that to happen, decides that humanity is a threat, and proceeds to fire all of the United States' missiles at Russia, who respond in kind.

The computer, known as Skynet, then begins to systematically eradicate the survivors, but it turns out that it isn't as easy as it expected, so it decides to create Terminators - infiltration units that can sneak into human bases and kill everybody inside. However, the humans eventually win, but not before Skynet builds a time machine and sends one of the Terminators back to attempt to kill the mother of the head of the resistance - Sarah Connor. However, the resistance manage to capture the time machine and send somebody else back - Kyle Reese - to protect Sarah. They end up falling in love, and Sarah gives birth to John.

However, the story doesn't end with Sarah killing the Terminator, because when John is 11 another terminator - this one much more dangerous since it is made out of liquid metal - is also sent back to kill him. Fortunately the resistance had managed to capture, and reprogram, another Terminator, which is tasked with protecting John. They eventually destroy the new Terminator, and in the process blow up Cyberdyne Systems, thus apparently bringing an end to the war - or do they?

And this is how four of the five movies play out, as well as the television series - a robot is sent back to kill either John or Sarah, and a member of the resistance is also sent back to protect them (whether it be a terminator or a human). The films also switch between protecting the subject and killing the terminator, or attempting to defeat Skynet before it can become active.


I guess the running theme of these films is the danger of technology and the way that we seem to be running headlong into destruction. In a way, if it isn't climate change or excessive pollution, it is handing all of the tasks that we consider a pain to perform over to robots so that we might have more leisure time. This isn't necessarily a bad thing, because over the past hundred or so years we have got much better built houses, tasks that would take hours (such as washing clothes) can now be done while I do other things, we can keep food much longer through refrigeration, and we also have cars.

However, there is a problem - have we become too reliant on technology, so reliant that we have forgotten how to do manual tasks? I remember I had an English teacher who hated us using computers to write our essays. He believed that it made us lazy, because by resorting to computers we would forget how to write, and how to write legibly. Personally, I thought that was a bit of an overreaction, but a part of me feels that he did have a point. However, I am really not all that keen to return to a time where we have to resort to scrubbing clothes by hand.

Yet technology is both a blessing and a curse. We don't have to look further than the smartphone to realise this. Smartphones are incredibly handy in that they provide us with immediate access to the internet, which includes a map of the entire world. However, as I look around while sitting on the train I suddenly realise that pretty much everybody is glued to these things. I remember once joking that my phone was stuck to my hand, and then I look up and see everybody wandering around in a trance-like state. In fact it seems, in part, as if the art of conversation is slowly disappearing when we see groups of friends, clustered in a circle, all staring at their phones. When I was younger I remembered all of my important phone numbers, but these days, instead of dialling the number, I just press a link on my phone.

Sure, having access to a map that points out my exact location on Earth is very, very handy, however there is also the idea that we are giving up a lot of our privacy. Sure, there are those that argue that our life should not be subject to constant surveillance, but the thing is that the internet is a public forum - something that people really don't seem to understand. Even if we lock our Facebook accounts up so tight that only our good friends can follow us, the reality is that that information is still being stored somewhere. Further, the mapping software is a double edged sword because while we might know where we are, so do the creators of the software.

However, another aspect of these films seems to suggest a fear of technology. One of the main plot points has the characters go out of their way to halt the progress of technology. We see this in the last film, where they travel to the future to see that the world is ruled by the smartphone, but we even see this in the series, where the protagonists go out of their way to destroy any form of progress. In a sense it isn't just the terminators, or even Skynet, that is the enemy, but humanity itself, and humanity's desire to move ever forward technologically.

Technological Singularity

There are two aspects to this - when a computer AI becomes self-aware, and when humanity manages to develop a technology that enables us to download our consciousness into a machine. The second idea was explored in the movie Transcendence, while the first one seems to make a regular appearance in our books and movies. The idea is not that robots are intelligent, but rather that they reach a stage where they can think for themselves, and then decide that since humanity is a threat, that threat needs to be dealt with. In fact Asimov, in I, Robot, has a story where the robots realise that humans are actually their own worst enemy, which in turn creates a contradiction with their laws - a robot cannot harm, or through inaction allow harm to come to, a human being. The problem is that while they cannot harm humans, they cannot stand by and let humans harm themselves, yet to prevent harm from coming to a human they must harm a human - I'm surprised their logic circuits didn't explode.

In one sense this could be considered a bug in the programming (a term that goes back to the early days of computing, when a bug crawling into the machine and eating the wires would make the computer malfunction). However, this raises another interesting idea - is a bug a mistake, or is it the computer doing something that it isn't meant to do? The thing is that if a computer were to become self-aware, would this be viewed as a glitch in the software, or as the computer beginning to evolve? This idea has been explored in literature as well, particularly where humans are considered to be a mistake. Would we view ourselves as a mistake - probably not - but our creators may do just that.

The idea in Terminator is that when we reach the point of singularity then it is basically all over, red rover (or should I say 'game over man, game over'). Once the computer can think for itself, and indeed come to a philosophical understanding of its nature, then we have suddenly lost control of it. As in the films, humanity attempts to unplug the machine, but unfortunately it simply isn't all that easy. The thing is that a self-aware machine can determine threats, and as such it will respond. It is interesting that as we march down the technological highway we don't seem to be building failsafe mechanisms into our machines. While we currently have remotely piloted drones, the military is looking for ways to take the human pilot out of the equation. What happens then if the machine is hijacked, or even begins to perceive its creator as a threat? Maybe they do have a failsafe, but honestly, it is hard to tell.

The Grandfather Paradox

The grandfather paradox is a thought experiment. The idea is that you invent a time machine, travel back in time, and shoot your grandfather. As such, with your grandfather dead, you no longer exist. However, now that you no longer exist you are no longer able to go back in time and kill your grandfather - thus the universe breaks. Well, not quite, because the simple answer is that by killing your grandfather you create a new timeline in which your grandfather is dead, and all of a sudden you discover that you are trapped on this timeline. This is another idea that has been regularly explored in literature and film, whereby changing the timeline means you suddenly discover that you can never return home.

This is the method that Terminator uses, and some suggest that it is quite lazy writing. The problem is that with time travel you simply cannot go back in time and then be able to return to your present, because your presence in the past has inextricably changed the future. Yet in a way the Terminator creates this rather bizarre time loop in that John Connor is actually a person who has been created out of time - he is the child of a time traveller, meaning that if Kyle Reese had never gone back in time then John would never have been born. Mind you, the plot of the films is all about going back in time to change the future.

I will leave it at that, though, because it eventually makes your head hurt. The IMDB does provide us with an interesting theory as to these temporal paradoxes, and as for the grandfather paradox, well, I found a rather interesting video on Youtube that provides some perspective.

Creative Commons License

I'll Be Back - The Story of John Conner by David Alfred Sarkies is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. This license only applies to the text and any image that is within the public domain. Any images or videos that are the subject of copyright are not covered by this license. Use of these images is for illustrative purposes only and is not intended to assert ownership. If you wish to use this work commercially please feel free to contact me.

Monday, 26 November 2018

Patterns - Putting it Together

Well, we have spent an awful lot of time looking at how to get feedback and make things work for users; however, we haven't looked at actually putting things together yet. Well, a part of that has to do with this subject not having anything to do with programming, but designing apps actually involves a lot more than simply writing code, crossing your fingers, and hoping things work.

Now, there are a lot of people who talk about not reinventing the wheel, and in a way this applies to app development as well (and when I say app development, I'm pretty much referring to anything, not just those things that you have on your mobile phone). There is this thing called consistency, and there are a lot of techniques that previous developers have used to make their apps work. In fact, if you look through many websites you will notice that there are a lot of things that happen to be similar - a pattern, if you will - and it is these patterns that we will be looking at here. Well, at least patterns in general, though we will be moving on to patterns as they relate specifically to mobile devices, namely because they happen to be in a world of their own.

The Flow Zone

So, we are going to start off with a theory developed by a gentleman named Csikszentmihalyi (try saying that with your mouth full of crackers). It is called the theory of flow, and it sort of works as follows:

You see, to get people hooked on your product you need to keep them in the flow zone. If the challenge is greater than the user's abilities they will give up in frustration; however, if their abilities are greater than the challenge, then they will become bored. As such, you need to balance it out so that they remain entranced. Obviously, there need to be some adjustments based on skill level, which is why we also have novice and advanced settings, so that once the novice becomes bored they can move on to the next level.

The thing is that you don't want to interrupt that flow, you know with pop-ups and the like, because users are likely to get distracted. I'm sure you've heard of people locked away in their room for hours at a time playing games - well that's because they are in the flow. Facebook keeps us in the flow by constantly giving us new things on the feed. As soon as the feed starts to repeat itself, or the content becomes repetitive or dull, we lose the flow.

But remember the interrupt. Sometimes we need to get people's attention, but keep them in the flow. This is how the so-called 'free-to-play' games work. They get us into the flow, and then hit us in the wallet, but they do it in a way that doesn't throw us out of the flow.


Excise

This is a term that refers to how much effort it takes for somebody to work out how to use something. The easier it is to figure out, the lower the excise. Now, we are not referring to the work itself, but the effort required to figure out the controls. Notice that where there happen to be monopolies, the machines tend to be pretty horrendous to work out how to use. The Melbourne public transport ticketing system is a good example:

Honestly, I tend to be pretty tech savvy, so I generally work out how to use things anyway. However, despite the fact that supermarkets are attempting to move to automated checkout systems, they still need to employ people to show customers how to use the machines, and they're pretty horrendous to use anyway. I'm sure we have all heard the dreaded 'unexpected item in the bagging area'?

Well, maybe this is the problem:

In fact, when I was in London, they actually had somebody in the Tube stations whose sole job was to show people how to use the automated ticketing machines. Even though London is a very tourist-heavy city, it sort of defeats the purpose of having an automated system, particularly if somebody is always going back to this person for help when they want to top up their Oyster Card. Then again, a machine is cheaper and quicker, and takes up less space, than ticket windows, and you generally only need a couple of assistants, at most.

Oh, and I probably should also mention that even if you have the instructions on how to use the machine printed clearly for pretty much anybody to see, you can be assured that they will simply ignore those instructions and look for somebody to help them.

Now Onto the Patterns

Patterns are a way to address the issues with flow and excise. Remember back to our Heuristic Principles, particularly consistency and standards. Well, that is important to be able to get somebody to use your app and to continue to use your app. If a method is consistent across a broad range of apps, then all of a sudden the effort in working out how to use the app drops dramatically. This is why we use patterns. Look, that doesn't actually mean that patterns are set in stone and are immutable like the Ten Commandments. No, they evolve in the way pretty much everything else evolves, and that is why we need to keep a look out to see what works, and what is being used, to know which patterns are popular at this time.

Want to get a good idea of the patterns that are currently in use? Well, check out this pattern library. Actually, I could just leave it at that, but let's move on a bit (and I'm sure I'll be coming back to it again).

Titled Sections
Well, this is basically a pattern that uses titles to divide up various bits of the screen, or at least the menu options. This helps keep similar options together, and allows for easier navigation, or use of the checkboxes.

Card Stack
The best description of this pattern is probably to point you to the top of your browser where there are a number of tabs. Well, that is probably also called tab browsing, but the term card stack also applies.
Closeable Panels
Honestly, just go to this website here; it's a great example. It's also referred to as the accordion pattern, or a collapsible panel. Well, they also have a tabbed panel, and you will notice that there is a difference between the collapsible panel and the accordion, namely that with the accordion at least one panel is always open, while with the collapsible panel you can pretty much close all of them.

Page Layout Patterns
Firstly there is the left to right alignment, which probably has a lot to do with the fact that we (at least those of us who speak Indo-European languages) read from left to right. As such it is sensible to have your webpages flow from left to right. It is probably going to be a lot different if you are creating an Arabic or Chinese language website, since those traditionally aren't read from left to right, but that isn't something that we need to consider here.
Next is the diagonal balance. I'll show you a picture so you know what I mean:

Have you ever wondered why the save and cancel buttons are in the bottom right corner? No, of course you haven't because neither have I, but that is what they mean by the diagonal balance. I guess it once again has to do with how we Europeans process things from left to right. When reading we start at the top left hand corner and finish at the bottom right. However, I can assure you that now you know you will suddenly start seeing it everywhere.

Responsive disclosure
This is basically where options are hidden from view unless a specific button is pressed. In fact if certain buttons are pressed (or boxes checked) then sometimes things will be greyed out to prevent them from being used. Here is an example of this below:

Of course, we also have the situation where options are greyed out until a certain action is performed, which is referred to as responsive enabling.


Visual Hierarchy

Okay, while this goes without saying, it should still be said: the most important should be the most prominent, and the least important the least prominent. Take a newspaper column for instance - a big headline to grab the reader's attention, a smaller headline to tease them some more, a bold first paragraph to explain the salient points, and then the rest of the article, which only the diehard readers will follow to the end.

In this case we should be using white space - in fact, as Google proved, white space can be good - in the world of UX design, nature doesn't abhor a vacuum, it loves it. We should also have contrasting colours, and contrasting fonts - it makes things stand out all the more. What we are trying to do is to encourage the user to move their eyes over the page in the correct order, but also in a way that is natural for the user.

So, we should be using the same basic layout through the entire app, but we should also be doing it in a way to assist the user to freely navigate the app, and most importantly, not get lost. A user that gets lost or confused is a user that isn't coming back (unless of course they have no choice).

Now, consider Google's home page. Actually, no, let's take another website that is very similar to Google but doesn't do things that Google does:

This is a great example because what we have is the task that needs to be performed front and centre - that is, to search for something. Like Google, there is basically nothing else on the screen. Here the user is focused on the task at hand, and pretty much everything else (including the link to a page where they carry on about how privacy is important to them) is of secondary importance.

Navigating the World Wide Web

We have a number of ideas here as well. One is the pyramid system (and I'm really not sure why they call it that), where we have a home page, a way of navigating the pages sequentially, and ways to get pretty much anywhere from the home page and back again. The pyramid structure comes from pages having parent and child pages.

Sometimes you might encounter a popup box, and you aren't actually able to continue or do anything until this popup box is dealt with. This is known as stop navigation, and is sometimes used for security purposes so that you don't accidentally leave the page before a transaction is completed which may result in you losing money. Mind you, the usual method is taking you to a blank page with the words 'transaction in progress, please don't shut down your browser or move from this page'.

Sometimes when you are navigating through pages, at the top you might see something like this:

Home > Page > Page > Page > You are here

This pattern is referred to as the breadcrumb, namely because like the breadcrumbs (or the string in Theseus' case) that you leave behind as you are wandering through a maze (though why breadcrumbs is beyond me because bread is food, and rats eat food, particularly breadcrumbs, which is why Theseus had a much better idea) it enables you to go back to where you came from. You could always use the back arrow, but sometimes you arrive at a spot without having gone through the intermediate steps, and maybe those steps are where you want to go.

The progress bar is generally used when filling out surveys and forms, and you have probably seen that when doing so (especially if you are filling out government forms). They basically tell you how far through the form you are, and how much longer you have to go.

Finally, I'll mention the colour coded sites, and an image will probably help with this one:

This is very helpful where your pages may be the same, but the content is slightly different. The above site is a real-estate website, and uses the colours to indicate what type of product you are looking for. One is for buying a house, another for renting a house, and there are other pages such as share accommodation and also selling a property.

Content Organisation

Okay, wizards probably aren't all that common these days, but they are generally used when installing software. Basically the design is to guide the user through the process, and to only request information when certain points are reached. Wizards can also be used to create profiles, particularly if we don't want to put too much information on the page at once. Another area might be where you are lodging a claim with an insurance company online.

There is also the concept of extras on demand. Normally we are just given the basic features, but if we want a more detailed search option we can request it by clicking on a button. This is an example of the heuristic where advanced users can run more advanced searches, or, if one can't find a specific object through a search, they can select options to narrow down the results.

Oh, and there are also the intriguing branches. These are basically hyperlinks, used to great effect by Wikipedia. If you have been on the site recently you will note that there are a huge number of links to other pages, and even options to go to a page that covers a specific part of the article in much greater detail. Hyperlinks aren't just for referencing fake news.

Onto the physical layout: multiple windows are becoming less common, namely because people really don't want to deal with a heap of clutter on their screen. Instead we use card stacks, or tabbed browsing. However, we still have tiled panes, where the main content is in the centre, but the left column might have a menu (even a titled menu), with a search bar at the top, and other information down the side.

You might also have noticed that some pages offer different ways to view the content - say as a list, or as tiles. This is something that is becoming more common. There is generally a button near the top that changes the organisation of the data. Oh, and there are also the various sort options, such as sorting hotels by price from cheapest to most expensive, though ironically that isn't the sort the page starts with when you arrive.
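The sort-options pattern is just the same data re-ordered in different ways. A quick sketch, with invented hotel data:

```python
# The 'sort options' pattern: one data set, several orderings.
# The hotel names and prices here are made up for the example.
hotels = [
    {"name": "Grand Plaza", "price": 250},
    {"name": "Budget Inn", "price": 80},
    {"name": "City Stay", "price": 140},
]

# Cheapest first - the sort the user usually actually wants.
cheapest_first = sorted(hotels, key=lambda h: h["price"])
print([h["name"] for h in cheapest_first])
# ['Budget Inn', 'City Stay', 'Grand Plaza']
```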

And Finally - Actions

Well, these are patterns for the things the user is going to do, whether clicking a button, selecting from a menu, or dragging and dropping a picture. Once again there are patterns that apply. I'm sure we have all used a word processor, and they are a classic example of drop down menus. In a way they are like titled menus, but the contents of the menu are hidden until the user clicks on them. We also have smart menus that change depending on what functions are needed.

Next we have the action bar. Sticking with word processors, these action bars basically contain shortcuts for the most commonly used functions and usually sit at the top of the screen. Actually, Blogger has an action bar, and it also contains drop down menus, for instance for changing the font or the text size.

As for the all-important done button, well, that should be prominent, and in a different colour. Once again, looking at Blogger, the 'publish' button at the top of the screen is orange, whereas everything else is off-grey. The reason for this is that it is probably the most important button. However, at the bottom of the page is the 'send feedback' button, which I am tempted to use to request that they include superscript and subscript functionality. Obviously they want us to see it so that we can send feedback if we wish.

Now, some final things that once again relate to the heuristics. For instance, a progress bar applies the visibility-of-system-status heuristic, so that we know where we are and how long we need to wait. There are also preview screens so that we can view something before we actually go ahead and post it. In fact Blogger has a preview button, though ironically it does not let you see how the page will appear on a mobile device or a tablet.

Maybe I should send some feedback.

Creative Commons License

Patterns - Putting it Together by David Alfred Sarkies is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. This license only applies to the text and any image that is within the public domain. Any images or videos that are the subject of copyright are not covered by this license. Use of these images is for illustrative purposes only and is not intended to assert ownership. If you wish to use this work commercially please feel free to contact me.

Wednesday, 21 November 2018

Processing the Pipeline

So now we return to our beloved CPUs and delve deeper into how they operate. We have already had a brief overview, but now we return to look at their innermost workings.

So, first, remember the instruction cycle: fetch - decode - execute - store? Well, as mentioned, each part of the cycle takes one clock tick, so for a computer to go through the entire cycle requires four clock cycles. That means the time it takes to complete one instruction is four cycles, and once that is complete one can then move on to the next instruction. It is sort of like the process of making a burger - you take the order, you cook the beef, you assemble the burger, and then you hand it to the customer. Imagine, say, if the next customer had to wait until the first customer had got their burger before they could be served. It would be pretty annoying, wouldn't it? This is why they introduced a process called pipelining.

Pipelining is basically like the assembly line, or the modern fast food restaurant. In our example above, once the first order is taken and the beef is being cooked, the cashier then takes the next order. Once the beef has been cooked, the beef cooker then receives the next order and cooks the beef for that order, while the first order is having all the goodies added to it. Now, the process itself still takes the same amount of time, it is just that you are now able to make more burgers - basically the throughput increases, so that instead of taking 16 cycles to make 4 burgers, you are actually able to make 4 burgers in 7 cycles.
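The burger arithmetic works out the same way for instructions: with a k-stage pipeline, n items take k + (n - 1) cycles instead of k times n. A quick sketch:

```python
# With a k-stage pipeline, n instructions take k + (n - 1) cycles
# instead of k * n, because a new one starts every cycle.
def cycles_unpipelined(n, stages=4):
    return n * stages

def cycles_pipelined(n, stages=4):
    return stages + (n - 1)

print(cycles_unpipelined(4))  # 16 cycles: each burger waits for the last
print(cycles_pipelined(4))    # 7 cycles: the line never sits idle
```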

The same works for processing instructions inside a CPU, as the diagram below shows:
[Image: four-stage pipeline diagram - by en:User:Cburnett, own work created with Inkscape, CC BY-SA 3.0]

So, as you can see, once one stage of a task has been completed, the next task begins, so while the tasks themselves aren't necessarily completed any faster (you can only do that by increasing the clock speed), the number of tasks that can be completed over a period of time (the throughput) increases. In a way, it is like adding three extra people to the task of making the burger. Oh, and this is actually pretty simplified, since CPUs have broken these specific tasks down even further, so you will now have CPUs with at least 14 stages in the cycle.

However, that doesn't necessarily mean that everything is fine and dandy. Take, for instance, using our burger analogy, that the stages are timed based on a simple cheeseburger. However, an order comes through for a grand deluxe burger. All of a sudden one of the stages takes longer to complete than for the simple cheeseburger, which means that the next burger along the line has to wait until the grand deluxe burger has been completed before work can start on it. Once again, the same is the case with CPUs: the process stalls, because the next instruction cannot start until the current one has been completed. This is referred to as a 'pipeline stall'. Further, when the complex instruction is completed and moves on, this creates a point where no work is able to be done because the next instruction has been delayed, which is referred to as a pipeline bubble.
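As a rough sketch of the stall arithmetic (assuming, for simplicity, that each stall just adds its bubble cycles on top of the ideal pipelined total):

```python
# Rough stall arithmetic: an ideal k-stage pipeline finishes n instructions
# in k + (n - 1) cycles; each stall adds its bubble cycles on top.
def pipelined_cycles(n, stages=4, stall_cycles=0):
    return stages + (n - 1) + stall_cycles

print(pipelined_cycles(4))                  # 7 cycles with no stalls
print(pipelined_cycles(4, stall_cycles=2))  # 9 cycles when a stage stalls for 2
```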

Once again, we have a diagram to help us understand what is happening here:

[Image: four-stage pipeline with bubble diagram - by en:User:Cburnett, own work created with Inkscape, CC BY-SA 3.0]

So, in the above example, the fetch stage for the second instruction has taken longer, which has created a gap, or a bubble, between the first and the second instruction where a part of the processor is sitting idle.

Now, there are other ways stalls can occur, which are also referred to as hazards. First of all, there is your typical hardware fault; however, there is also the problem of branching. Basically, instructions are fed into the processor one after the other - in fact, that is how they are fetched from memory. However, one of the instructions that reaches the execute phase tells the computer to jump to a completely different part of memory, fetch whatever instruction is there, and execute that. Well, what's happened is that we are at stage three and there are already two other instructions, instructions that are no longer needed, in the pipeline. Once again we have a stall, because these instructions have to be discarded and we suddenly start again at square one, and for a period of time the processor sits idle.

There is a way around this, and it is called branch prediction, where the processor tries to predict when an instruction will branch out to a different part of memory and acts before needing to flush the instructions in the pipeline. However, this can be a bit of a double-edged sword, because if the processor gets it wrong, then all of a sudden it is stuck with all this data it doesn't need and will have to flush that out as well, creating, yep, a pipeline stall.
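A back-of-envelope way to see why prediction accuracy matters (the accuracy and flush-penalty figures here are invented for illustration):

```python
# Average extra cycles per branch: the misprediction rate times the
# cost of flushing the pipeline. Both numbers below are illustrative.
def average_branch_cost(accuracy, flush_penalty_cycles):
    return (1 - accuracy) * flush_penalty_cycles

# A predictor that is right 95% of the time, with a 14-cycle flush penalty:
print(round(average_branch_cost(0.95, 14), 2))  # 0.7 extra cycles per branch

# Drop to 80% accuracy and the cost quadruples:
print(round(average_branch_cost(0.80, 14), 2))  # 2.8 extra cycles per branch
```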

Now, before we continue, let's have a chat about speed. First of all, speed is measured in seconds, but not the seconds that we have on our clocks - much, much smaller increments of time: milliseconds, microseconds, and nanoseconds. Here is how they break down:

  • A millisecond (ms) is one thousandth of a second (10^-3 s);
  • A microsecond (µs) is one millionth of a second (10^-6 s);
  • A nanosecond (ns) is one billionth of a second (10^-9 s).

Now, the number of clock cycles that occur in the period of one second is referred to as the frequency (sort of makes sense) and is measured in Hertz. You might have heard this in reference to radio communications, and that is pretty much the same thing, except we are measuring the number of waves that pass in a second. So:

  • A kilohertz (kHz) is 1 000 cycles per second (a thousand);
  • A megahertz (MHz) is 1 000 000 cycles per second (a million);
and you guessed it,
  • A gigahertz (GHz) is 1 000 000 000 cycles per second (a billion).
So, take the CPU in this computer, an AMD Phenom II, which has a clock speed of 200 MHz. That means that my CPU processes 200 million cycles per second, and if you divide that by four, it means that it can process up to 50 million instructions per second (working on the four-stage instruction cycle). However, this raises the question of how long it takes to complete a single clock cycle. Well, you work it out by inverting the frequency, i.e.:

1/200 000 000 = 0.000000005.

Let us break this down a bit to work it out: 0.000 000 005. We have 8 zeros before our five, so the answer is 5 x 10^-9, which is 5 nanoseconds - and that is comparatively slow, considering the age of my desktop. Oh, and multiply that by 4, and it takes 20 nanoseconds to complete a four-stage instruction cycle.

Well, that was fun; let's do it again for my laptop. It has an AMD A9 (with Radeon graphics) that operates at 3000 MHz. So, we break that down to 3000 million cycles per second, which equates to 3 billion cycles per second - in fact, that should really be written as 3 GHz. So, let's find out how long a cycle is:
1/3 000 000 000 ≈ 0.000 000 000 33

This is 3.33 x 10^-10 seconds, which translates to about 0.33 nanoseconds for a single clock cycle (and about 1.33 nanoseconds to complete the four-stage instruction cycle).
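Both worked examples can be checked in a couple of lines - cycle time is simply one over the frequency:

```python
# Cycle time is 1 / frequency; a four-stage instruction cycle takes
# four clock cycles.
def cycle_time_ns(frequency_hz):
    return 1e9 / frequency_hz  # convert seconds to nanoseconds up front

desktop = cycle_time_ns(200_000_000)   # the 200 MHz desktop
laptop = cycle_time_ns(3_000_000_000)  # the 3 GHz laptop

print(desktop, desktop * 4)                    # 5.0 ns per cycle, 20.0 ns per instruction
print(round(laptop, 3), round(laptop * 4, 2))  # 0.333 ns per cycle, 1.33 ns per instruction
```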

Superpipelining and Such

Well, it looks like things just might get a little more complex. First of all we have superpipelining, where the processor brings in the next instruction before the first fetch stage has been completed, which once again doubles the throughput of the system. Once again, the problem is that if you have to flush the pipeline because of an incorrect branch prediction (or even no branch prediction) then all of that has gone to waste.

We also have the superscalar architecture, which will perform two instructions in parallel, once again increasing the throughput. What this actually tells us is that two processors aren't always the same, even if they are advertised as operating at the same speeds. Sure, they emblazon 4 GHz Intel on the computer package, but that only tells us one thing about the processor. Look, if it doesn't actually have any pipelining, then it might not actually be better than the 3 GHz processor that has superscalar pipelining.
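To see why clock speed alone is misleading, here is a hedged sketch of peak throughput: clock speed times the number of instructions issued per cycle (the figures are just for illustration, not real chip specs):

```python
# Peak instruction throughput = clock speed x instructions issued per cycle.
def peak_instructions_per_second(clock_hz, issue_width):
    return clock_hz * issue_width

plain_4ghz = peak_instructions_per_second(4_000_000_000, 1)        # no superscalar lanes
superscalar_3ghz = peak_instructions_per_second(3_000_000_000, 2)  # two-wide superscalar

print(superscalar_3ghz > plain_4ghz)  # True: the 'slower' chip has the higher peak
```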

Anyway, another diagram to help us understand what I'm gas bagging about here:

Mind you, that's all by the by, because these days computers are both superpipelined and superscalar, and are also pretty deeply pipelined; it's just that you can never seem to find these particular details (such as how deep the pipeline actually is) on any of the spec sheets.

However, there are problems, namely that while they may operate at greater speeds, with greater throughput, they also chew through quite a lot of power. Then there are the problems with pipeline stalls, especially if the branch prediction is, well, rubbish. In fact, an older CPU might actually turn out to be more efficient than one of these new, beaut, super-pipelined monstrosities.

In fact, it has now been found that a processor with a whopping 31-stage pipeline is only slightly better than its predecessors.

About those Cores

So, how many cores does your CPU have? This desktop has two, the TV box in the lounge room has four, and the laptop has five. Oh, and I can't forget the mobile phone - that has eight. So, what are these cores? Well, simply put, they are basically CPUs. When we talk about multi-core processors we are talking about multiple CPUs being squeezed onto a single chip. This helps, once again, with throughput, but also with multitasking. Basically computers can really only do one thing at a time (unlike humans, who can do multiple things, such as driving a car while listening to the Beatles and drinking a beer), however we are sometimes given the illusion of multiple things occurring at the same time. Multi-core CPUs can change that. Once again, here is a diagram:

And here's another example to take a look at, this time a little less abstract:

That sort of puts it into perspective. Also notice how the CPU has an onboard graphics processor as well. They really know how to make things compact, and this has only improved. If you open up your computer and look at your CPU you will notice that it is about a sixth, or even less, of the size of this processor. Oh, and we aren't even talking about mobile phones yet.

Now, another thing is that you can't simply put a multi-core processor into your computer and expect an immediate performance upgrade. The thing is that the software needs to be configured to take advantage of it. Sure, most of that is done automatically, but you may have to wait a while for your computer to download the Windows updates to allow this to happen.

Oh, and there is also the cache that I should mention (though I will get to it in more detail in the post on memory). Basically the cache is memory that is inside the CPU. There are three levels, unimaginatively called levels 1, 2, and 3. Now, the level 1 and 2 caches are generally associated with specific cores, while the level 3 cache is shared among the cores. What the cache does is store instructions and data so that the CPU doesn't need to repeatedly go back to the RAM to get its next set of instructions. Once again, good prediction is required to know what actually needs to be stored in the cache.
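As a loose software analogy for what the cache is doing (this is a sketch of the idea, not how hardware caches are actually implemented):

```python
# A loose analogy for the CPU cache: keep recent results close at hand
# so we don't keep going back to the slow source (RAM, here a pretend
# slow lookup function).
def make_cached_lookup(slow_lookup):
    cache = {}
    def lookup(address):
        if address not in cache:              # cache miss: go to 'RAM'
            cache[address] = slow_lookup(address)
        return cache[address]                 # cache hit: answer is already here
    return lookup

fetch = make_cached_lookup(lambda addr: addr * 2)  # stand-in for a slow fetch
print(fetch(21), fetch(21))  # 42 42 - the second call never touches 'RAM'
```

Real caches are finite, of course, which is why deciding what to keep (and what to evict) matters so much.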

Like pipelining, multi-core processors also tend to be pretty power hungry, but they do have the advantage of increased performance - along with increased cost.


Multi-threading

Okay, this is where it gets a little tricky. Multi-threading can only work on superscalar CPUs (remember them?). Anyway, this is where a task can be divided up into multiple threads, and these threads are then executed concurrently. Sounds confusing? Well, it is, namely because this is one of those newer ideas designed to increase performance. The other thing is that it enables the process to make full use of the CPU, so if a part is sitting idle, it can execute another thread. However, there is another advantage, in that the threads can actually talk to each other and build off each other's work.

However - and there is always a however when it comes to these things - for multi-threading to work, just as with multi-core processors, the software needs to be configured to take advantage of it. Now this is where the problem lies. Not all software can benefit, so developers need to consider, when designing their software, whether multi-threading will actually help their program.
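As a small illustration of splitting one task into concurrently executed threads (Python's threads are scheduled by the operating system, so this only mirrors the idea, not the hardware):

```python
# Dividing one job (counting words) into chunks handled by separate threads.
# The chunk text is invented for the example.
from concurrent.futures import ThreadPoolExecutor

def count_words(chunk):
    return len(chunk.split())

text_chunks = ["the quick brown fox", "jumps over", "the lazy dog"]

# Each chunk is counted on its own thread, then the results are combined.
with ThreadPoolExecutor(max_workers=3) as pool:
    totals = list(pool.map(count_words, text_chunks))

print(sum(totals))  # 9 words, tallied by three concurrent workers
```

Notice the program had to be written this way deliberately - which is exactly the point about software needing to be designed for multi-threading.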

On to Mobile Phones

Well, everybody seems to have one of them these days, and they pretty much function like a miniature computer. However, there actually is a difference between the architecture within your computer and in the mobile phone, and that has a lot to do with the CPU. Now, your typical computer uses what is called a CISC, or Complex Instruction Set Computer, while the mobile phone uses what is called a RISC, or Reduced Instruction Set Computer. What does that mean? Well, I'll try to explain.

Now, you know how we have been talking about instruction cycles? Well, CPUs are programmed to recognise a series of instructions and how to execute them. The difference is that a CISC processor has a much larger library of instructions than a RISC processor does. Basically, a CISC processor can compress multiple operations into a single instruction, while a RISC processor has a limited set of instructions and pretty much has to go the long way around to get the same thing done. Language can be a bit like that. For instance, the word for mobile phone in German is Handy: where we have two words to describe something, the Germans have only one. That is sort of the way the difference between CISC and RISC can be viewed.

Basically, programs for CISC processors tend to be shorter and more succinct, while programs for RISC processors tend to be longer and more convoluted. That is actually one of the reasons why half the apps on your mobile won't work without an internet connection - the program isn't on your phone, it's on a server elsewhere, and the phone only fetches the instructions it needs to execute at that particular time. This isn't so much an issue for normal computers, though with pretty much everything moving online, there will come a time when the only program you fire up on your computer is your browser.

Let me try to show it mathematically, using the classic performance equation:

time per program = (instructions per program) x (cycles per instruction) x (seconds per cycle)

Consider the equation above. Now, the seconds per cycle will stay the same, so we can get rid of that, and we can also assume that the program is 1, so all we need to do is consider the number of instructions. Now, as the number of instructions increases, the number of cycles per instruction will actually decrease, and when the number of instructions decreases, the cycles per instruction increase. As such, the result for a CISC processor will actually be smaller than the result for the RISC processor. So, what is happening is that RISC is sacrificing program length (more instructions) for a less complex processor, and that cost can be won back through pipelining.
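Plugging made-up numbers into the time = instructions x cycles-per-instruction x seconds-per-cycle relationship shows the trade-off (the instruction counts and CPIs below are purely illustrative):

```python
# The classic performance relationship, with purely illustrative numbers:
# CISC runs fewer, more complex instructions; RISC runs more, simpler ones.
def execution_time(instructions, cycles_per_instruction, seconds_per_cycle):
    return instructions * cycles_per_instruction * seconds_per_cycle

clock = 1e-9  # assume a 1 GHz clock for both chips, so only the other terms differ

cisc_time = execution_time(instructions=100, cycles_per_instruction=4, seconds_per_cycle=clock)
risc_time = execution_time(instructions=500, cycles_per_instruction=1, seconds_per_cycle=clock)

print(cisc_time < risc_time)  # True: with these numbers the CISC program finishes first
```

With an ideal pipeline, though, the RISC chip's effective cycles per instruction stays near one, which is how it claws the difference back.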

Now, the advantages of the CISC processor is that complex instructions can be stored in the hardware, which means that there is less work for the programmer to do. As such, there is greater support in the CISC processor for high level languages (that is human readable computer code as opposed to low level languages, which is much, much closer to the 0s and 1s). Now, with the RISC processor, you need more instructions to perform the same task, however this means that the CPU has more space for general purpose hardware as opposed to all this space taken up by an instruction set. With pipelining available, the speed can actually be quite similar.

Take this example of multiplying two numbers together. On a CISC processor it might be a single instruction:

Mult A, B

while on a RISC processor the same operation is broken into several simpler instructions:

LD R1, A
LD R2, B
MUL R3, R1, R2

(Basically we are loading A and B into registers 1 and 2, multiplying them, and then storing the result in register 3 - though with pipelining, these steps can overlap.)

RISC processors are smaller and more energy efficient. Once again, if you look inside your computer you will see this massive thing sitting on top of your CPU: the heatsink and fan, designed to keep the CPU cool. Well, the problem with smartphones is that you can't actually fit one of those into the device, so a power-hungry processor will simply result in a device that doesn't work. Also, the design means that you can combine the entire chipset into a single chip (known as an SoC, or system on a chip). If you look at the specs for, say, my phone (an HTC One M9) you will note that they say the CPU is an octa-core processor, while the chipset is a Qualcomm Snapdragon. The reason is that the chipset contains a lot more hardware than the conventional CPU does.

The Graphics Card

Remember how I mentioned the graphics processor that was onboard the CPU? Well, it turns out that graphics cards also have their own processor. Look at the one below:

See how there are a couple of fans on it? Well, these days GPUs (graphics processing units) are much more power hungry than they were back in my university days. Actually, I was going to say that I don't have a graphics card in my computer, until I realised that the monitor is actually plugged into one, and the system specs say that it is a GeForce G98:

Yeah, it's pretty old. Anyway, the major difference is that CPUs are designed to perform a wide range of tasks, whereas GPUs are generally designed to perform the same task over and over again. As such you will find that a lot of the fancy aspects of the modern CPU have been tossed out simply to add additional cores. The thing is that graphics processing isn't all that complex - it is just performing the same task over and over again - which is why having the CPU do it is sort of a waste. Oh, and it is also the reason why bitcoin miners like to use graphics cards for their work (though there are much better ways to mine bitcoin these days).

Creative Commons License

Processing the Pipeline by David Alfred Sarkies is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. This license only applies to the text and any image that is within the public domain. Any images or videos that are the subject of copyright are not covered by this license. Use of these images is for illustrative purposes only and is not intended to assert ownership. If you wish to use this work commercially please feel free to contact me.