Wednesday, 30 January 2019

Basic Cryptography

Well, I have now finished my second semester of computer science, so I guess it is time to start writing some posts on the subjects that I did, so as to assist me in studying for my exams. This first post will basically be an introduction to computer security, or more specifically cryptography. This is probably one of the foundations of our modern age because, honestly, without cryptography we probably wouldn't be able to use our smart phones to check our share portfolios, or be able to withdraw money from ATMs (or whatever you call them wherever you happen to be, but if you don't know what an ATM is, it stands for Automatic Teller Machine - basically one of those machines that you use to withdraw cash from your bank account).

I probably would also make a comment about being able to share personal information with your friends by using Facebook, but honestly, that company really seems to be having some serious issues with their security, so I guess that would be a pretty bad example.

Anyway, cryptography has been around for a very, very long time. In fact it has been around since long before Charles Babbage thought up the idea of an automated calculating machine, and Ada Lovelace came up with a way of programming this hypothetical machine. I would say that it goes back as far as Julius Caesar, but honestly, it goes back even further than that. The thing is that as long as people have been waging war against each other, they have been devising ways of sending messages so that if a message was intercepted, the interceptor wouldn't be able to read it.

One of the most famous codes was the German Enigma code, which was used to pass information to the troops during World War II, and which was famously cracked by Alan Turing, who has earned the title of the father of computer science. The story of how Turing cracked the code was told in that rather well known movie, The Imitation Game. Since our lecturer showed us a clip from the movie in the first lecture, I probably should do the same here.


Caesar Cypher


Anyway, enough of movie trailers and let us get on to some cryptography. Basically, we start with a cypher that you probably heard about, or even played around with, when you were a kid - the Caesar Cypher (which is the reason that his name popped up above). I remember when I was a kid our school library had some spy books in there. Not spy books in the sense of a Tom Clancy novel, but rather books on how to be a spy, and some tricks that spies use - sort of. One of them was writing coded messages, and one of the ways was through the use of what is known as the Caesar Cypher.

Basically, the Caesar Cypher is where you take the letters of the alphabet and shift them a certain number of places to the right (or the left). The traditional way was to shift three places to the right, but that is too obvious, so you can pretty much shift by as many places as you like. Below is an example:

The above table is an example of how the cypher works. So, where the letter A appears in the message, you replace it with a D. Where a B appears it is replaced with an E and so on and so forth. So, the following message:

I am a piglet

Will come out as follows:

L dp d sljohw
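If you want to play around with it yourself, here is a minimal Python sketch of that shift (the shift of three and the message are just the example from above):

def caesar_encrypt(message, shift):
    result = []
    for ch in message:
        if ch.isalpha():
            # shift the letter and wrap around the end of the alphabet
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation alone
    return ''.join(result)

print(caesar_encrypt("I am a piglet", 3))  # prints: L dp d sljohw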

The problem with this cypher is that it is pretty easy to crack. In fact, I wrote a computer program that is able to crack it pretty easily. In reality, you only have 25 different combinations that you can try, and by moving through each of them you can pretty easily crack the code. In fact, the longest part of cracking the code was actually writing the program to do it, and once the program has been written and executed, it literally takes seconds to decrypt the text.
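My program was a bit more elaborate than this, but the brute force idea boils down to something like the following sketch, which simply prints all 25 candidate decryptions so you can spot the one that reads as English:

def caesar_decrypt(ciphertext, shift):
    result = []
    for ch in ciphertext:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            # shift back the other way to undo the encryption
            result.append(chr((ord(ch) - base - shift) % 26 + base))
        else:
            result.append(ch)
    return ''.join(result)

for shift in range(1, 26):
    print(shift, caesar_decrypt("L dp d sljohw", shift))
# the line for shift 3 reads 'I am a piglet'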

I probably should mention that cyphers also have keys, and not surprisingly the key is what is used to decrypt, or unlock, the code. With the Caesar Cypher, the key is the number of places that you shift to generate the code.

Frequency Analysis

Now, as mentioned, there are only 26 (or rather 25 useful) possible shifts with the Caesar Cypher. However, what if there isn't any particular order in which the letters are arranged, such as below:

Well, that happens to be a little bit more complicated, particularly since there are somewhere in the vicinity of 2⁸⁸ different combinations. For those who can't do that sum in their head, that is basically 3.1×10²⁶ different combinations that you could have. Basically that is an awful lot of combinations, and to try every single one of them will take you a very, very long time, even with a computer. In fact, if a computer were to try one combination a second, it would take 9.8×10¹⁸ years to try all of those combinations. Honestly, I don't think anybody is going to be living that long.

So, do we have the perfect uncrackable code? Well, no, not quite. Guess what, I wrote a program to crack that one as well, though it is a little more involved than our Caesar Cypher. You see, the problem comes down to our language. Basically there is something known as frequency analysis. Each of the letters in our alphabet has a certain frequency with which it appears. For instance, the most common letter happens to be E, followed by T, then A, then O, then N. This chart should help:

So, the way we crack one of these codes is by counting the number of times each of the letters appears in a passage. We then compare those counts with the chart above, find the most common letters, and swap them with the corresponding common English letters. After about five swaps we start to make out words, and once we are able to make out words ('the' is a classic example) we are able to knock off more letters until we have basically cracked the code.
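A rough Python sketch of the counting step might look something like this (the actual substitutions still take the trial and error described above, and the exact frequency ordering varies depending on the text you sample):

from collections import Counter

def rank_letters(ciphertext):
    # ciphertext letters sorted from most common to least common
    letters = [ch for ch in ciphertext.upper() if ch.isalpha()]
    return [letter for letter, count in Counter(letters).most_common()]

def first_guesses(ciphertext):
    # pair the five most common ciphertext letters with E, T, A, O and N
    return dict(zip(rank_letters(ciphertext), "ETAON"))

print(first_guesses("wkh wuhdvxuh lv exulhg xqghu wkh rog rdn wuhh"))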

Double Transposition

Another form of code is what is known as a double transposition cypher. What is happening here is that the letters aren't actually being changed, as is the case with the above two cyphers, but rather the positions of the letters are being shifted around. So, for instance, we have the phrase 'Attack at Dawn'. The transposition cypher will then produce the phrase ' taw natt adakc'. Now, what they have done here is create a matrix, placed the phrase in the matrix, and then shifted the rows and the columns. The image from the lecture notes does well to demonstrate that:

So, as you can see from above, the key is a matrix of three columns and five rows. Now, we also have the permutation, namely that rows 1 and 3 have been swapped, and rows 2 and 5 have been swapped. Also, columns 2 and 3 have been swapped. Mind you, with a matrix this small it is pretty easy to crack, but when you have larger matrices it becomes much, much more difficult.
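Here is a rough Python sketch of that exact example - three columns and five rows, with the row order (3, 5, 1, 4, 2) and the column order (1, 3, 2) acting as the key (the function name and the zero-based indexing are just my own choices):

def double_transposition(plaintext, row_order, col_order):
    cols = len(col_order)
    # pad the message with spaces so it fills the matrix exactly
    while len(plaintext) % cols != 0:
        plaintext += ' '
    rows = [plaintext[i:i + cols] for i in range(0, len(plaintext), cols)]
    # shuffle the rows, then the columns, according to the key
    shuffled = [rows[r] for r in row_order]
    return ''.join(row[c] for row in shuffled for c in col_order)

# rows (3, 5, 1, 4, 2) and columns (1, 3, 2), written as zero-based indexes
print(double_transposition("attack at dawn", [2, 4, 0, 3, 1], [0, 2, 1]))
# prints: ' taw natt adakc'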

Code Books

The final one we will talk about here is code books. This is where you have a book that lists a series of words and a code next to each word. Normally the code is a series of numbers. The most famous example involves a telegram known as the Zimmerman telegram. This was a telegram that was sent, in code, by Germany to Mexico during World War I. The telegram basically said that if Mexico joined the war on the side of the Germans, then when the Germans won the war Mexico would receive portions of the United States as a reward. Not surprisingly, the telegram was intercepted - by the British, who were able to decipher it and pass the contents on to the Americans. Well, the Americans read the telegram, and not surprisingly declared war on Germany shortly after.

Anyway, here is an image of the Zimmerman Telegram:

And since we mentioned it above, here is a picture of the Enigma Machine.


In my next post, we will continue looking at cryptography, and some rather basic techniques that are used today.



Creative Commons License

Basic Cryptography by david.sarkies@internode.on.net is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. This license only applies to the text and any image that is within the public domain. Any images or videos that are the subject of copyright are not covered by this license. Use of these images is for illustrative purposes only and is not intended to assert ownership. If you wish to use this work commercially please feel free to contact me.

Monday, 21 January 2019

Storage for the Masses

No, I'm not going to be talking about data farms, or those huge buildings in the middle of nowhere that simply exist to hold all of your private data that the government has been slowly collecting (namely because you have been willingly giving it to them by basically publishing your entire lives online). I'm not even going to be talking about cloud services, namely because that will be covered somewhat later. No, I'm just going to be boring and talk about the storage devices in your computer.

We've already spoken about memory, which is basically volatile memory - mostly - so the storage devices we will be looking at now are generally referred to as non-volatile memory, that is that it doesn't matter whether you turn your computer off, the data will still be saved.

I'm not going to be going too far back here, you know, back in the days when basically everything was stored on punched cards, and you had to keep them in a specific order because if they got out of order then the program wouldn't run (and it was a nightmare when you were carrying a huge stack of them and you tripped and fell). There were other forms, but I will mention a few here:

See what I mean - image by ArnoldReinhold (own work).


Tape: Ironically, this is still in use today. You generally don't find many tapes available, but back in the glorious eighties they were everywhere. In fact kids like me would have drawers full of them containing songs that we had copied off the radio, and some of us even had computer games on them. Look, back then we complained that tapes were slow, but that actually wasn't the problem - they were just as fast as disks - it's just that they were read sequentially, so if you wanted to load something off the tape you had to wind it, or rewind it, to the appropriate spot. Ditto if you wanted to save something, and you had to make really sure that it didn't accidentally save over something you already had on there. There are other quirks, such as configuring the tape deck, but I might come back to that another time since I consider this medium actually quite fascinating.

Floppy Disks: Another medium that is basically obsolete. They were called floppy, as opposed to hard, because the actual recordable medium was quite floppy. Like tape, they store data magnetically, but not sequentially. They are the other form of storage - Random Access. The problem with these was that, well, they really couldn't hold all that much, and the older ones, such as the 5.25", could be damaged if you didn't watch out. They solved this with the 3.5" by placing the disk in a hard outer shell, and had a sliding piece of metal to cover the hole. Once again, this is also something I might return to at a later date.



Hard Disks: These are still in use today, and are characterised by the fact that the medium on which the data is written is hard. They store data magnetically and are also read randomly. These days hard disks are pretty cheap, and can literally hold terabytes of data (a 1 terabyte hard drive is now less than $100.00). The problem is that they are slow, very slow. Being random access, there is also a tendency for the data to fragment. Initially everything is stored in sequence, however once sections are deleted, the computer will go and write over those sections with other data. All of a sudden your data is scattered all over the disk.

Modern hard drives are composed of platters, sort of multiple disks sitting one on top of the other. Data is stored by setting the magnetic polarity of a section of the disk, so it is N - S for 1 and S - N for 0. There are actually two heads on the arm, one for writing and one for reading. Hard drive platters are also divided into tracks and sectors - the track is the ring at a given distance from the centre, and the sector is a segment of that ring. So, the hard drive locates which platter (and which side, since the platters are double sided) the data is on, then seeks the track, and then the sector. This does pose a problem when the data is spread over multiple platters.
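Just as an illustration (the geometry numbers here are completely made up, and modern drives hide all of this behind logical block addressing anyway), this is roughly how a single block number gets turned into a track, a head (which side of which platter), and a sector:

# purely illustrative geometry - real drives don't expose these numbers
HEADS = 4              # two double-sided platters
SECTORS_PER_TRACK = 63

def locate_block(block_number):
    # which track (cylinder), which head, and which sector the block sits on
    cylinder = block_number // (HEADS * SECTORS_PER_TRACK)
    head = (block_number // SECTORS_PER_TRACK) % HEADS
    sector = block_number % SECTORS_PER_TRACK
    return cylinder, head, sector

print(locate_block(100000))  # (396, 3, 19)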

Anyway, here are some images to help you make sense of what I was talking about, firstly what the inside of a hard drive looks like:

Well, it looks as if hard drives do have multiple platters. Anyway, an image of how the data is stored:

Finally, I felt that a diagram of the track and the sectors is much better than trying to explain it (while it says 'floppy drive' it is the same for a hard drive):


The other problem with hard drives (and this probably applies to the others that I have mentioned) is that they are mechanical, which means that they rely on moving parts that are much more susceptible to wear. However, hard drives deal with this through what is called head parking. Basically, when the drive is not in use, or the computer is powered down, the head moves off the platter into what is effectively a 'car park'. These are also called landing zones. Laptops even have what is called an accelerometer, which detects if the computer is falling and will automatically park the heads. Now, when the head actually comes into contact with the platter, this is referred to as a head crash, which can pretty much make the drive unreadable. One of the reasons we should always unmount our external hard drives is that when we unmount them, one of the things that happens is that the head is parked (and anything the drive happens to be doing is finished off). If you don't, the head will remain where it is, and if you drop the drive, the head could be damaged, and the data lost (as I know all too well).

The other issue is that it relies on magnetic media, which means that if a nuclear bomb is detonated nearby then the resulting electromagnetic pulse is basically going to wipe all of your data. Then again, if a nuclear bomb goes off in the vicinity, you probably have bigger problems to deal with.

Compact Discs: I'll include DVDs in this category as well, though the proper term is optical media. Initially CDs were read only, then you could get single use CDs (and DVDs), and then you could get multi-use ones. When we wrote something to the CD we would refer to it as 'burning', which is actually what was happening - a laser in the device was burning the information onto the CD. The original ones were made of plastic, but the later ones had a chemical coating that allowed the information to be rewritten. The data is stored as a series of pits. Where the medium goes into a pit it is a 1, where it doesn't, it is a 0, as such:


Unfortunately CDs are vulnerable to scratching, because if it is scratched suddenly the data changes. It also causes the CD to jump, as you may know if you have listened to a scratched CD (that is if you have ever listened to a CD). The other thing is that they are mechanical, which means the devices are slow and are prone to wear and tear. However, while they can hold substantially more data than a floppy disk, they still hold nowhere near as much as a hard drive.

The other thing about a CD is that it is a sequential medium, which means data is stored, and read, in order. The track on the CD is actually one long spiral, much like the groove on the old vinyl records (although a CD is read from the centre outwards rather than from the edge inwards). Oh, and they aren't magnetic either, which means the data can survive an EMP from a nuclear attack (which is why I would use CDs as a backup medium).

Solid State Drives: These drives are basically made up of a series of chips, much like the RAM circuits in the computer. The difference is that you can read and write to them. They are substantially faster than Hard Drives, however they are also much, much more expensive. The other thing with solid state drives is that they suffer from wear, which means that the more you write to them the more wear they suffer. This is solved by a process called wear leveling, in that the entire drive will be written to before the computer starts rewriting over older sections that have been 'erased'.

Also, unlike the other media, SSDs don't have any moving parts, but are controlled by a section of the drive called the controller. The controller actually determines the speed of the drive, and makes decisions on how to read, write, and clean up data that is on the drive. The drive uses a series of electrical cells that are divided into grids, and these grids are separated into pages, and these pages (where the data is stored) are then divided into blocks.

SSDs don't actually write over the data as other drives do, but rather they search the drive for pages that are no longer being used, and make sure that the surrounding pages are also not being used. They then basically blank them, and then write the data onto the blank section. It is like a sheet of paper full of scribble - you simply can't write over the scribble and hope it remains legible. Instead, what you do is you rub the scribble out, and then write onto the paper (you still write things on paper, don't you?).
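As a toy illustration of that 'rub it out first' behaviour (this is nothing like real controller firmware, just the general idea), you could model the drive as a handful of blocks, each holding a few pages:

EMPTY, STALE = None, "stale"

class ToySSD:
    def __init__(self, blocks=4, pages_per_block=4):
        self.blocks = [[EMPTY] * pages_per_block for _ in range(blocks)]
    def delete(self, block_index, page_index):
        # data isn't rubbed out straight away, the page is just marked stale
        self.blocks[block_index][page_index] = STALE
    def write(self, data):
        # find a block where no page holds live data, blank the whole block,
        # then write into the freshly erased pages
        for block in self.blocks:
            if all(page in (EMPTY, STALE) for page in block):
                block[:] = [EMPTY] * len(block)   # the erase step
                for i, chunk in enumerate(data[:len(block)]):
                    block[i] = chunk
                return True
        return False  # nothing erasable - the controller would have to clean up

ssd = ToySSD()
ssd.write(["photo1", "photo2"])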

There is also a type of drive called a Hybrid, which is basically half an SSD and half a hard drive. I won't really go into any more detail because they seem to be trying to get the best of both worlds, but in reality end up with the best of neither. In reality, you might as well have an SSD and a hard drive in your machine (like my laptop has).

Staying with SSDs, you have different types (of course), and that is usually to do with the number of levels. Single level cells really only have two states, 0 and 1, and a lower threshold voltage. Remember how I mentioned that SSDs are prone to wear? Well, that comes down to the threshold voltage, and the lower the better. You then have the dual level cell, which has four states: 00, 01, 10, 11. Then there is the triple level cell, which has eight states: 000, 001, 010, 011, 100, 101, 110, 111. Of course, there is even the quad level cell, which has sixteen states, but I suspect (or hope) you get the picture. Anyway, the more levels, the higher the threshold voltage, which means the more likely they are to wear. Those flash drives you use to transfer data are actually triple level SSDs, which means that they are actually quite prone to wear (which is why they are comparatively cheap).

Now, it seems that the term 'threshold voltage' has been bandied around without saying anything about what it actually means. Well, in technical terms, it is the amount of voltage that is required to force a connection in a transistor. Basically, in simple terms, it is the amount of force that is required to open the gate. Now, this is important for SSDs, because the more levels per cell, the more force that is required to open the gates. As such, while these multi-level SSDs (I should call them flash drives, because that is what a flash drive actually is - typically a triple level SSD) may be able to hold more, more force is required to 'open the gates', and as such they are more prone to wear and tear.

So, the SLC (single level cell) has one data bit per cell, has the fastest writing speed, the longest life, and is the most expensive; the MLC (multi-level cell) has two bits; the TLC has three bits; and the QLC has four bits, is the slowest, has the shortest life, and is the cheapest. There is also the eMLC, or Enterprise Multi-Level Cell, which is actually more robust than your typical MLC, but they are generally only used in the commercial world. Also, SSDs are used in your mobile devices, and are generally TLCs (namely due to the cost, but they are also nowhere near as bad as the QLCs).

I probably should say a few more things about the controller. Basically it is a processor in the SSD that executes firmware level code and manages the SSD. However, in a hybrid it also performs the function of managing the hard drive as well. Still, your typical hard drive also has a controller which pretty much performs the same function as the SSD controller. It's sort of like the warehouse archive clerk who knows where everything is, and has his own system for finding it (which is why you can't sack him).

They also perform other functions, such as correcting errors, wear leveling, utilising a cache for items that are being retrieved, and also noticing and dealing with bad blocks. In some of the more advanced systems, it also performs encryption functions. The SSDs are also over provisioned (which means they have more than what is advertised) to provide space for the controller to perform its functions.

Understanding the Statistics

There are a few things that we need to understand about drives when we are looking at their stats (though once again, these are never actually printed on the box, you have to dig around for them). One of them is the 'burst transfer rate'. This is the speed at which data is moved between the drive's controller and the rest of the computer, while the 'sustained transfer rate' is the speed at which data is moved from the platters into the controller. While the burst transfer rate is generally faster, the sustained transfer rate is actually more indicative of the drive's performance. Burst rates aren't going to be sustained if there are bottlenecks in the PC, or if the data is not laid out sequentially on the hard drive.

Spindle Speed: not really important, but hard drives spin faster than CDs, namely because CDs aren't secured in place in the same way that hard drive platters are. Desktop hard drives spin faster than laptop hard drives because with laptops you have power considerations, particularly if the laptop is unplugged. Finally, server hard drives spin faster than desktop hard drives because, well, noise isn't really a factor in a server room. Sure, if you can deal with the noise, then go your hardest, otherwise just enjoy the sound of silence.

I mentioned a difference between the way data is stored on a CD and a hard drive. The reason is that they are read differently. CDs, particularly the older ones, have a spiral coming out from the centre, and they work on what is called 'constant linear velocity'. However, hard drives have concentric circles coming out from the centre, and these are divided into sectors. This is 'constant angular velocity', and with that you find that the data density closer to the centre tends to be greater than at the outer edge. With CAV the rotation speed stays the same whether the head is near the centre or the edge, but with CLV the rotation speed changes depending on how close, or far, you are from the centre.
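To see why the density changes under constant angular velocity, compare how much track passes under the head each second at the inner and outer edges of a platter (the spindle speed and radii below are just made-up, plausible numbers):

import math

RPM = 7200                       # assumed spindle speed
revs_per_second = RPM / 60

for label, radius_cm in [("inner track", 2.0), ("outer track", 4.5)]:
    # linear speed = circumference x revolutions per second
    speed = 2 * math.pi * (radius_cm / 100) * revs_per_second
    print(f"{label}: about {speed:.0f} metres of track pass the head each second")

# if each track holds the same number of bits, the bits on the outer track are
# stretched over a longer path, so the recording density is lower out there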

IOPS: This is basically Input/Output Operations Per Second, and measures how fast the drive can perform read/write requests.

Throughput: This is the speed at which data is transferred into or out of the device, and is measured in bits per second.

Latency: Here is that word again - it seems to appear everywhere - and it pretty much measures how long it takes for a device to begin a task.

I'll finish off with another screenshot, which is an IOPS test comparing a hard drive and a flash drive.

Notice that there are a number of tests performed. First is 16MB, which is a large sequential file. The next is 4K, which is a small random file, and the final one is 512B, which is also a random read, but the data is more scattered. This is the IOPS measurement, but you also have them for throughput and latency.



Creative Commons License

Storage for the Masses by David Alfred Sarkies is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. This license only applies to the text and any image that is within the public domain. Any images or videos that are the subject of copyright are not covered by this license. Use of these images is for illustrative purposes only and is not intended to assert ownership. If you wish to use this work commercially please feel free to contact me.

Wednesday, 16 January 2019

A Comedy of Errors - Mistaken Identities



While I have seen this play before - but then again that is pretty much stating the obvious when it comes to the more popular Shakespearean plays - this was the first time that I saw it in a more traditional sense. Sure, the last version that I watched was fairly traditional, sort of, but this version was much, much more farcical, at least as far as I can remember. Okay, I might not have been cracking up laughing, but despite having had a long day, and feeling a little exhausted by the time I had arrived at the theatre, this version just had this ability to really pick me up and draw me into the action - then again a good Shakespearean performance, particularly a comedy, is certainly going to have some audience participation.

The last version that I saw was a production by the Bell Shakespeare Company, and honestly, I am sort of getting sick of their productions, particularly since they are becoming ever more experimental. In fact, a part of me feels that one is starting to stretch the definition when it comes down to describing them as Shakespeare, if their last production is anything to go by (and since it was Julius Caesar, I could say that they basically butchered one of my favourite plays).



Sure, both productions followed the same plot, however this one was much, much more amusing. The Bell version had a backdrop of a very seedy city, which happened to be Ephesus, whereas here the suggestion was that the setting was the Ottoman Empire. Okay, that put me off a bit, namely because I was always of the impression that the story was set back during Roman times, namely because the cities of Ephesus and Syracuse no longer exist. However, one cannot be too harsh, particularly since I'm not entirely sure Shakespeare would have been all that particular when it came to costumes.

So, let us get down to the plot - basically the background is that this man was on a ship with his wife, who was heavily pregnant. She ended up giving birth to twins, and then the servant also gave birth to twins. Well, the ship had a mishap in that it hit a rock and split in two, and basically everybody was separated - in particular the twins. However, they were rescued, but remained ignorant of each other's existence. So, this is what is explained to the duke (or should we say Pasha, since it was supposed to be the Ottoman Empire), namely because this man had arrived in Ephesus; however, since he had come from Syracuse, and Ephesus and Syracuse were at war, he was captured, put in prison, and sentenced to death.


Meanwhile, Antipholus and his servant Dromio arrive in Ephesus, and are immediately warned to be careful since it is clear that they have come from Syracuse, and the problem is that the duke is not all that happy with the Syracusians, so it would be best if they hide their identities. Unbeknownst to them, there also happens to be an Antipholus, and a Dromio, living in Ephesus. In fact, they are twins, it is just that neither knows of the other's existence. As such, the stage is literally set for what is going to be two and a half hours of mistaken identities.

Look, this isn't anything that Shakespeare conjured off the top of his head, namely because the whole idea was borrowed from the Roman playwright Plautus - namely the Menaechmi, or The Menaechmus Twins. However, as is the case with Shakespeare, he adds a lot more to the story so that it really begins to shine with his brilliance and style. Okay, I've made numerous comments in the past about how unoriginal Shakespeare happens to be, but the problem is that Shakespeare is a far cry from a Hollywood director. Sure, I've been one of those critics, however the thing is that not only have his plays managed to survive and remain within our conscious memory, but they have gone as far as to become a part of the Western Canon.



One of the things that stood out with this play was simply how farcical the whole thing was, and the company that performed it hammed it up to no end. In fact, having a couple of musicians sitting at the back of the stage just added to the charm of the production. This was also one of those productions where the instruments were also used to produce sound effects - such as when people punched, kicked, and slapped others. This means that as well as being a farce, it could also be considered slapstick, which is something that the previous version seemed to lack. Then again, Bell Shakespeare does seem to be a lot more serious with their productions than this particular group, which is probably why I liked this one so much better.

The thing is that having twins who not only have the exact same name, but whose servants also have the exact same name, is ridiculous, but I guess this is the nature of Shakespeare's plays. The ironic thing is that there doesn't seem to be much in the way of stage direction either, so in many cases it is basically left up to the producers to use their imagination, and I guess this is where the creativity comes in. Okay, we should remember that since this is one of Shakespeare's comedies, none of the characters die. Well, not quite - though I'm not entirely sure whether the execution scene at the beginning of this production is actually in the play.



Of course, we have the fact that Antipholus of Ephesus is actually married, so of course when his wife sees Antipholus of Syracuse, she automatically thinks that it is her husband. Well, he's a little baffled, but does end up playing along with it, which leads to the situation where her actual husband suddenly discovers that he is locked out of his own house, and no matter how hard they try, they simply cannot bust through the door. Then of course we have confusion over gold chains, money that seems to have gone missing, and the fact that both parties miss each other by minutes, or even seconds. Of course, like a lot of Shakespeare's plays, there is the big reveal at the end, where it becomes blindingly obvious what was going on, and suddenly everything makes sense - though of course we knew it all along, it is just that none of the characters did.

The interesting thing is that there is also this generalised statement that at the end of a Shakespearean tragedy everybody dies, while at the end of a comedy everybody gets married. That isn't the case here. Then again, that is basically one of those broad sweeping statements that doesn't apply in all situations. The other thing is that this is one of Shakespeare's earlier plays, so it seems to lack the depth and the insight that some of his later plays invoke, such as the contrasts between the city and the country. Instead, what we have here is simply a tale of confusion that seems to be sending everybody completely insane, until they realise what is actually going on.




Creative Commons License

A Comedy of Errors - Mistaken Identities by David Alfred Sarkies is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. This license only applies to the text and any image that is within the public domain. Any images or videos that are the subject of copyright are not covered by this license. Use of these images is for illustrative purposes only and is not intended to assert ownership. If you wish to use this work commercially please feel free to contact me.

Thursday, 10 January 2019

Making it Accessible

One of the interesting things that I have discovered is that many of the AI products that we have today were originally developed to assist disabled people. Okay, they aren't actually AI in the Skynet sense of the word (and there have been instances where such AIs have been turned off, such as the one that went onto Twitter and suddenly became racist, or the Facebook AIs that supposedly invented their own language to communicate - though that story is false). These are actually called assistants, and technology such as facial recognition was actually developed in that way, in a sense to allow blind people to effectively see.

So, accessibility is actually pretty important, particularly when you come to developing applications for the government (it is a legal requirement) or for major organisations. Accessibility opens the application up to many more people, and this isn't just having screen readers for blind people, and making sure that any pictures on your site have alternate text (which I am bad at putting down, by the way), but also developing your website so that screen readers aren't spewing out garbage and wasting people's time. Oh, and voice input - that is actually another accessibility option (though it sure beats using the keyboard at times).

Thus, websites are measured by how accessible they are, not just for ordinary people, but also for those with disabilities. In fact, you can be assured that at some stage in our lives each and every one of us is going to have a disability of some sort.

We all seem to assume that people with disabilities are, well, blind people, but it goes beyond that. Having subtitles on those Youtube videos, as well as a transcript, can help deaf people. Then there are those who don't know how to use a computer, or those with antiquated systems, and systems that are in remote or regional areas (where the internet is nowhere near as good). In fact it is estimated that 30% of people in remote areas have images turned off.

You probably can't get more remote than this - Dave Morgan Creative Commons.

We also need to consider people in lockdown environments where web access is restricted, or where they are in areas that are noisy or subject to glare. We need to take all of this into account. Not everybody is going to be sitting in their little office with a controlled environment and open access to the internet.

Welcome to Hell

Coming back to blind people, it isn't just the fact that they can't see images; there is also the issue with, well, the mouse and the keyboard. If they can't see, how do you think they know where the mouse pointer is? Okay, keyboards can be solved through the use of braille, but there are also voice activated commands, which have reached a reasonably sufficient technological level. As for links, make sure that they are actually meaningful, as opposed to simply using 'here' (which, once again, I'm quite guilty of).

For people with low vision we should try to avoid using text within graphics, namely because they are more likely to increase the size of the text to be able to read it (or use some form of magnifying device). As for people with colour blindness, try not to use colours that mean that they pretty much won't be able to see anything, particularly when dealing with text and background colours.

Like this particular one

Another thing to consider are people who have mental disorders, and there are a whole range of them. Okay, my brother, who has a brain injury, seems to be able to use a computer just fine, going as far as being able to work out how to edit a Wikipedia page. However, he is the exception, particularly since he has grown up using a computer and using them is literally second nature. Hey, he is even able to use Linux. However, not everybody is in his category so we need to be able to take them into account.

In many cases, the principles that we have been exploring over this series pretty much apply here as well. Keep things short and simple, and explain things without too much, or any, jargon. Actually, just drop the jargon - it might be difficult, but it is important, very important. Also, make sure that there are warnings, and ways to back out of actions that could result in serious complications.

Oh, and on the jargon bit, maybe drop the sarcasm and the colloquialisms. People from different cultures may not - and most likely don't - understand any of them. Actually, even people from other English speaking countries don't understand our Australian slang, particularly sarcasm.

Anyway, there are a lot of resources to look into on this issue, and this post was really just a wrap up of the User Centered Design series, and a way to bring your attention to these issues. In fact, taking them into account may just open your product up to many more users.
Creative Commons License

Making it Accessible by David Alfred Sarkies is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. This license only applies to the text and any image that is within the public domain. Any images or videos that are the subject of copyright are not covered by this license. Use of these images is for illustrative purposes only and is not intended to assert ownership. If you wish to use this work commercially please feel free to contact me.

Tuesday, 1 January 2019

Plugging Things in and Pulling Things Out - Upgrading

There was a time in the distant past (like the 90s and early 2000s) when we had this desire to upgrade and expand our computers. Well, maybe that is still the case, but I guess that also related to those of us who were hard core gamers and always wanted a decent computer to play all the top notch games, particularly if they required graphics rendering. This doesn't seem to be as much the case these days, in that it is really only the incredibly dedicated gamers who want the top notch machine, while many of us are happy with a computer that works (and in my case simply runs a C64 emulator or DOSBox).

Anyway, system and peripheral expansion is still something that needs to be considered, even though desktops really only exist for special purposes such as video creation and game playing - there really isn't a huge scope for expanding laptops, and they are so fiddly that there really isn't much point. What's more, pretty much all of the external devices use USB these days; however, one thing we need to remember is that not all USB cables are the same, and even though USB is, well, universal, there are still alternatives out there, such as FireWire.

Another thing that we need to consider are the internal connectors such as ATA, SATA, and SCSI. Oh, and we can't forget wifi and bluetooth, which are wireless protocols for expansion.

USB, Firewire, and Thunderbolt

Basically these are three different types of cables, each with advantages and disadvantages. Sure, USB is everywhere, but there is still some scope for using FireWire and Thunderbolt. First Thunderbolt, which is a connection standard that was developed by Intel. Initially it used its own connectors, but the latest iteration means that it runs over a USB-C connector. This is how Thunderbolt is currently being marketed:

Honestly, I wouldn't go by what the advertisement tells you. In fact that is something you should never do, but rather go by what other people have reported. The other thing is that the maximum they give you is always a theoretical maximum and can only be achieved in the rarest of circumstances.

Now, many of these protocols are measured in bits per second, or bps. So, when working out how much time it will take to actually transfer data we need to do some maths. Once again, remember, we can't use the maximum rate because it never operates at this speed. However, with this in mind, let's see how long it takes for a friend of ours - Jim - to copy 200 photos from, say, his camera to his hard drive.

Now, each of these photos comprises 3 megapixels, and each pixel comprises 3 bytes (one each for red, green, and blue). So, we need to work out the size of these photos. 3 megapixels is 3,000,000 pixels, at three bytes each, which gives us 9,000,000 bytes (9 MB) per photo. Now, there are 200 of these photos, so that gives us a total size of 1,800,000,000 bytes, which translates to 1.8 GB (remember capital B for bytes, small b for bits - don't get confused).

So, let us use a USB 'superspeed' transfer first, and say that it is operating at 60% efficiency. The USB transfers data at 5 Gbps (gigabits per second), but it is only operating at 60% efficiency, so that is 5 x 0.6 = 3 Gbps. Now, we have 1.8 GB of data to transfer, so converting the transfer speed into bytes we need to divide it by 8 (8 bits in a byte), which gives us 0.375 GB/s. Now, divide the size of the data by the transfer speed, so we have 1.8/0.375, which gives us 4.8 seconds. Not long.
Well, let's see how well it works with FireWire (an Apple development). FireWire 400 transfers at 400 Mbps, and say it is a little better and operates at 80% efficiency. So:

400*0.8 = 320 Mbps.

Now we need to convert the transfer speed into bytes:

320/8 = 40 MB/s.

To make it simpler, let us convert the data size into Megabytes:

1.8 * 1000 = 1800 MB.

So, the next step is to divide the size of the data by the transfer speed, so:

1800/40 = 45s.

A lot slower isn't it?

Okay, now let's throw another problem into the mix. Say, as a part of that transfer, Jim is also transferring over a wireless network whose transfer speed is 60 Mbps and which is operating at 25% efficiency.

So, first we work out the actual transfer speed:

60 x 0.25 = 15 Mb/s

Convert the transfer speed into bytes:

15/8 = 1.875 MB/s.

We already know the size of the data in Megabytes, which is 1800 MB, so we divide the data by the transfer speed:

1800/1.875 = 960s, which divided by 60 (60 seconds in a minute) gives us 16 minutes. Well, it seems that this is going to cause a bit of a bottleneck. However, once we had worked out the actual transfer speed, we could already compare that with what we already had, so we didn't really need to go further, unless we actually wanted to know how long it would take.
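If you want to check those numbers, here is a small Python function that does the whole calculation in one go (speeds in megabits per second, sizes in megabytes, and efficiency as a fraction):

def transfer_time(size_mb, speed_mbps, efficiency):
    # effective speed in megabits per second, converted to megabytes per second
    effective_mb_per_s = (speed_mbps * efficiency) / 8
    return size_mb / effective_mb_per_s

print(transfer_time(1800, 5000, 0.60))  # USB 'superspeed': 4.8 seconds
print(transfer_time(1800, 400, 0.80))   # FireWire 400: 45 seconds
print(transfer_time(1800, 60, 0.25))    # wireless network: 960 seconds (16 minutes)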

So, now the question comes down to why they don't operate at their peak speeds. Well, there is one obvious answer - marketing. Isn't it the case that advertisers are always going to talk their product up so that it appears to be better than it really is? Honestly, while there is always a desire to see 'truth' in advertising, it isn't something that you are always going to get.

But there are other factors as well, such as wear and tear, and environmental conditions such as interference, or even just a plain hot day. The other thing is that not all the bandwidth can simply be used for data; it needs to be shared with other things. Remember that USB cables also work to power some of these devices, and some bandwidth also needs to be reserved for other factors.

System Expansion

Okay, there was a time when motherboards were really basic, which meant if you wanted graphics then you needed a graphics card. In fact if you wanted Wi-Fi, or even to hook your computer up to a network you also needed a special card. Actually, come to think of it, I had to go down to Officeworks a few months back to purchase a new wifi card because my previous one had pretty much gone kaput (it's actually great using that word in its original context).

However, what is better - onboard integration or simply adding a new card? Well, my motherboard has its own video port, but my Dad included a video card when he built the system, no doubt for 3D rendering needs (which I suspect such an old motherboard really couldn't handle). However, by integrating these systems into the motherboard, it actually frees up the PCI (which stands for Peripheral Component Interconnect) slots for other things. Take the wifi card for example - onboard wifi is so ubiquitous these days that it simply is a waste to have to use a slot for a wifi, or even a network, card. There is also the issue of speed, but the thing is that if you want better graphics than your motherboard can supply, then you are going to look for other options.

Now onto the connectors. The main way peripheral devices are connected is either through PCI or ATA. ATA stands for Advanced Technology Attachment, and originally it would run in parallel. Actually, my storage devices are connected to the motherboard using ATA cables, and honestly, they are an absolute pain in that they pretty much take up most of the space inside the box - if you actually want to do anything you literally have to remove them. Another thing with the PATA cables is that they work on a bus topology, meaning that multiple devices could be connected to the same cable. However, there was no scope for simultaneous usage. In fact, you had to label one of the drives 'the master' and the other 'the slave' (blame the computer scientists, not me).

Well, they've been replaced with what is called SATA, or Serial ATA. SATA uses thinner cables, and also operates point to point, which means that separate cables are used for separate devices (though I suspect that the motherboard then needs enough ports to accept multiple devices). Another thing is that they also allow for hot swapping. As for speed, well, they're pretty fast, with the third generation transferring up to 6 Gb/s.

With regards to other components, we have PCI (Peripheral Component Interconnect) and AGP (Accelerated Graphics Port). PCI is what is termed a 'legacy' component, which basically means that while devices aren't made for it anymore, it is still kept around because there are still a lot of older devices out there. In fact many motherboards will still have a single PCI slot on them. They tended to be configured in a bus topology and their speeds were around 133 MB/s for both read and write.

The AGP was a later addition that basically connects straight to the CPU (or rather the northbridge, which you generally don't find in modern computers, but which was basically the gateway between the CPU and the rest of the computer - the memory connected directly to the northbridge, as did the graphics card, while the rest had to go through the southbridge and then into the northbridge). This pretty much speeds up the graphics card because it isn't fighting over bandwidth with other peripherals, whether they be inside, or outside, the computer.

Now, the architecture is slightly different in that we have the PCI Express standard. This is also point to point, but slightly different in that slots can share lanes. One slot will be labelled, say, PCIe 32, which means the card plugged in there can use 32 lanes. The next one is labelled PCIe 16. Now, if you also have a card plugged into the PCIe 16 slot, the card in the 32 lane slot can only use 16 of those lanes, because the other card is using the other 16. There are also PCIe 8 slots, but they don't actually share lanes with those two (but rather with other slots).

Bluetooth & Wifi

At first I thought bluetooth was a thing of the past because, well, I never use it. Then again, that's because I don't have an earpiece for my phone, nor do I have one of those fancy new cars. Actually, I don't even have a car. As it turns out, bluetooth is still regularly used. In fact, I used it to connect a wireless keyboard to my tablet because it is so much easier to write on a keyboard than it is on those stupid screen keyboards.

Basically you connect devices by pairing them. Actually, there was a time when people would use bluetooth to pick up other people in the bar, though I sort of wonder how they actually knew who was who, especially since you couldn't use it to send messages. What you could use it for was data transfer, and I would play around with it with a friend by pairing our devices and trading songs.

Now, not all devices are the same, and bluetooth does operate at different power levels. The other thing is that bluetooth doesn't have a huge range - from less than a metre to about 100 metres. These devices are divided into classes, with class 3 having the shortest range and class 1 the longest. The transmit power is also less for class 3 as opposed to class 1. The other thing to be aware of is that if you want to communicate with a class 1 device at a distance you also need a class 1 device, however that doesn't matter when you come into the class 2 range. Most headsets and mobile devices are class 2. Also, if you have devices of different classes, they will tend to revert to the range of the weaker class.

The other thing is that bluetooth is made up of profiles, as such:

Advanced Audio Distribution Profile (A2DP): This is usually used for headphones due to the high quality of the audio.

Cordless Telephony Profile (CTP): If you have a cordless phone, then this is the profile that is used to connect the phone to the base station (I have such a phone, but I never use it).

Dial-up Networking (DUN): This is used to turn your phone into a modem, though I generally just use the 'wireless hotspot' function.

File Transfer Profile (FTP): This is the profile that my mate and I would use to transfer songs.

Hands-Free Profile (HFP): This is basically used in those fancy, bluetooth enabled cars (though I could never get it to work when I was actually driving one of those cars).

Human Interface Device (HID): That keyboard I told you about? It uses this profile.

Headset Profile (HSP): This is the profile for those earpieces you see about the place.

LAN Access Profile (LAP): This is another way of turning your phone into a modem.

I mentioned that you have to pair bluetooth devices, and that is correct. When you have bluetooth enabled, your device can see other devices within range that are able to pair, however both ends need to give permission before they actually pair. This is why, when you turn bluetooth on, you can see (or you used to be able to see, since people don't leave their bluetooth on anymore) the other devices within range. This two-way confirmation is not always the case though, such as with keyboards, where you can realistically only confirm from one end.

As for Wifi, well that works on what is known as the 802.11 standard, though this is broken up into a number of standards, such as 802.11a, 802.11ac, 802.11b/g/n. I'm sure you get the picture.

Anyway, the letters at the end pretty much tell you the generation, with the latest being the ac, which was developed in 2013. Wifi has become faster over time, but in the early days (around b, which was 1999) it had problems with interference from other devices. There were also issues with it being blocked by, well, walls.

Wifi operates on one of two frequencies, being 2.4 GHz and 5 GHz. The latest rendition operates on the 5 GHz frequency. 2.4 GHz offers the better range, however the problem is that that frequency is actually quite crowded, so you can suffer from interference. 5 GHz doesn't have as great a range, but it does offer much better throughput. Also, there isn't as much interference.

Another thing to consider when discussing frequency is channel width, and this is measured, not surprisingly, in hertz (usually MHz). This is basically the width of the spectrum that has been allocated to the device, and it determines how large a pipe there is to channel data through. The problem with channels that are too wide is that they are more subject to interference from other devices.

A final thing to touch on here is beamforming. Initially wifi would simply blast waves away from the device in all directions, however modern technology means that when a device connects to the wifi router, the router will beam the signal towards the device, thus creating a much stronger connection.

Source: www.birate.com

Creative Commons License

Plugging Things in and Pulling Things Out - Upgrading by David Alfred Sarkies is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. This license only applies to the text and any image that is within the public domain. Any images or videos that are the subject of copyright are not covered by this license. Use of these images is for illustrative purposes only and is not intended to assert ownership. If you wish to use this work commercially please feel free to contact me.