1. Introduction to IT

Introductory meeting for students from non-technical departments.

It takes the form of a lecture; see my slides.

So what are the aims of this course? Obviously the main aim is to work efficiently on a computer. Everyone uses a computer, and presumably you manage to use it quite well, because you simply have to. Certainly all of you are able to do everything on a computer with some effort. During this course we will try to minimize that effort. Our ambition is to teach you a few tricks in some aspects of using popular applications, so that you will use them even more efficiently. But if anyone is just starting their adventure with computers, they should not worry; such a person will simply learn more. We would like to teach you things which may become useful in nearly every job, or just in your life. At least that is the idea behind this course.

We start with a short talk on how a computer and the Internet work, so that you know what you are dealing with. Next we will use some basic office applications. In particular, we will learn how to format documents efficiently (you will see how comfortable it is to know how to do it really efficiently!). Then we will spend some time on spreadsheets, which can make your life easier when dealing with calculations and data analysis. In the meantime we will discuss the ingredients of a well-prepared presentation. Finally, everyone will prepare a website from scratch.

Moreover, we will make use of this general rule for solving problems with computers. 😛

Now a few words on how to pass this course. We are serious: it is possible to fail this course. Your mark will be your result on the final test. During the test you will be asked to deal with three problems (editor, spreadsheet and webpage) in a relatively short time. The test will be taken only by those students who have been absent without an excuse at most twice and have achieved at least 50% of the homework points. Homework will be assigned after each topic, and you will actually be able to work on it partially during the classes. There will be a homework assignment to prepare a presentation, and its result will additionally count towards the final test. It will be possible to retake the test once. Examples of problems which may appear on the test can be found here.

So let’s talk about the history of computers. Obviously people have tried to simplify calculations for a very long time, developing tools like the abacus (used in Mesopotamia since 2700 BC!) or the slide rule (1620). Claiming that those were the first steps towards a computer would be an overstatement, but it shows a need which people have had for a very long time. I should also mention the mechanical theatre of Hero of Alexandria, the operation of which could be “programmed” to some extent. These two ideas, computation and programmability, met much later in the form of the computer.

By the way, the word “computer” has been used in English for a long time. The oldest recorded usage dates to 1613, but back then it meant something different, namely a person carrying out calculations.

An important step in the area of programmability was made in 1801 by Jacquard with his loom. Using so-called punched cards (cards with holes), one could control the loom so that a desired pattern was woven. Punched cards were later also used to input data into actual computers, and even at the beginning of my college years there were still tonnes of such cards lying around in my department, used as a source of paper for short handwritten notes. 😉

Not long after Jacquard, in 1837, an English mathematician from Cambridge, Charles Babbage, actually invented a computer. He did not build it; it remained only a design. He called his idea the Analytical Engine. The Analytical Engine was supposed to carry out any given sequence of calculation instructions. It was never built (it was very complicated and was supposed to be powered by a steam engine), but as an idea it was later realised in the form of the computer.

Remaining in the realm of ideas, I have to say a few words about Alan Turing. He is often called the father of computer science, and not without reason. His most important work dates to 1936, before the first computer was created. This work is a formal concept called the Turing machine. The Turing machine is an imaginary (and deliberately unrealistic) but very simple device. This device, though simple, can compute exactly everything that can be computed (by a human or by a computer), assuming it is given an unlimited amount of time and resources. Thanks to this concept, theoretical computer science (dealing with questions such as what can be computed, how hard problems can be, and how long it takes to solve them) bloomed. Alan Turing was a genius who was able to imagine, in a very simple way, something which at that time did not exist.
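
As a purely illustrative side note for the curious, the idea can even be sketched in a few lines of Python. The function name and the example transition table below are my own inventions, not part of the lecture; this tiny machine just walks right along the tape and flips every bit until it reaches a blank:

    def run_turing_machine(tape, transitions, state="start", blank="_"):
        # tape: list of symbols; transitions: (state, symbol) -> (new state, symbol to write, head move)
        # For simplicity this sketch only grows the tape to the right.
        head = 0
        while (state, tape[head]) in transitions:
            state, tape[head], move = transitions[(state, tape[head])]
            head += move
            if head == len(tape):
                tape.append(blank)
        return "".join(tape), state

    # A machine that flips 0s and 1s until it hits a blank, then stops.
    flip_bits = {
        ("start", "0"): ("start", "1", +1),
        ("start", "1"): ("start", "0", +1),
        ("start", "_"): ("done", "_", +1),
    }

    print(run_turing_machine(list("10110_"), flip_bits))  # ('01001__', 'done')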

There is some controversy about which of the first computer-like devices was actually the first computer. One can argue it was the ABC (1937), or maybe the Z3 (1941). More widely known were ENIAC (1946) and EDSAC (1949). But all of them were really big: those machines occupied a few rooms, or a few big cabinets at best. The road to miniaturization was open.

We have walked this road at quite an amazing pace. Transistors were invented, which made it possible to make things smaller and smaller. Computers began to be mass-manufactured (starting with IBM), and programming languages were invented. Computers finally fitted on a desktop (the PC, or personal computer), operating systems appeared (DOS), and eventually more convenient interfaces arrived: a keyboard, a screen, a mouse and a graphical interface. And that is how we ended up with smartphones and tablets.

How fast is this miniaturization happening? Very fast. So far more or less in accordance with the empirical Moore’s law, which states that the number of transistors fitting on a chip doubles every two years. In other words, it grows exponentially. Moore’s law also describes other data quite well: computing power relative to cost, sizes of hard drives, network capacity, etc. Will it be like this forever? Probably not; Intel claims that because of serious technical problems the doubling time has already slowed down to 2.5 years instead of two. But it may be that in the future we will develop new technologies and bypass those problems.
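
To get a feeling for what “doubling every two years” means, here is a tiny Python illustration (the starting point, roughly the first Intel microprocessor from 1971 with about 2,300 transistors, is only an illustrative assumption, not a figure from this lecture):

    transistors = 2_300                 # roughly the Intel 4004 (1971); illustrative only
    for year in range(1971, 2021, 10):
        print(f"{year}: about {transistors:,} transistors")
        transistors *= 2 ** 5           # five doublings per decade, i.e. one every two years

    # Prints about 2.4 billion for 2011, which is indeed the right order of magnitude.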

Let me now give a short reminder, starting with this nerd joke: there are 10 types of people, those who understand binary notation and those who do not. Obviously 10 is the binary notation for 2. Let us recall how it works: we want to use only the digits 0 and 1, so 0 means 0, 1 means 1, and then we already need two digits, so 2 is 10, 3 is 11, 4 is 100, 5 is 101, 6 is 110, and so on.
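
If you want to check this, Python will happily do the conversion for you (a plain illustration, nothing course-specific here):

    for n in range(7):
        print(n, "->", bin(n)[2:])   # bin(2) gives '0b10', so we strip the '0b' prefix
    # 0 -> 0, 1 -> 1, 2 -> 10, 3 -> 11, 4 -> 100, 5 -> 101, 6 -> 110

    print(int("10", 2))              # 2 -- the string "10" read in base 2 is indeed two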

One bit (an abbreviation of BInary digiT) is simply 0 or 1. A byte (1 B) is a sequence of 8 bits. Going further, we can use SI prefixes, so 1 kilobyte (1 kB) is 1000 bytes, 1 MB (megabyte) is a million bytes, 1 GB is a billion bytes, and 1 TB is a trillion bytes. But there may be a misunderstanding here. Since computers use binary notation, it is more convenient to use powers of 2. Notice that $2^{10}=1024$ is close to $1000$. So sometimes by one kilobyte (then written as KiB) people mean 1024 bytes, by a megabyte (MiB) they mean 1024 KiB, and so on. Between a gigabyte calculated in those two different ways there is a difference of about 70 megabytes! It can definitely lead to a misunderstanding, or even an abuse. Imagine you buy a hard drive with a declared capacity of 100 GB. If the manufacturer calculated this capacity by the first method, then when you put it into your computer you may find out that your system reports only about 93 GB…
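
The discrepancy is easy to verify; here is the arithmetic spelled out in Python (a plain calculation, using only the definitions above):

    GB = 10**9       # decimal gigabyte: 1,000,000,000 bytes
    GiB = 2**30      # binary "gigabyte" (gibibyte): 1,073,741,824 bytes

    print((GiB - GB) / 2**20)   # ~70.3 -- about 70 MiB of difference per gigabyte
    print(100 * GB / GiB)       # ~93.1 -- a "100 GB" drive holds only about 93 GiB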

Let’s now talk about a classic computer. People often say that the heart of a computer is its CPU (central processing unit). The CPU is an electronic unit which carries out simple operations on sequences of binary digits (usually called words). Those words are sequences of 32 or 64 bits, depending on the CPU, so we have 32-bit or 64-bit CPUs. The biggest manufacturers of CPUs are Intel and AMD. The clock rate of a CPU is, roughly speaking, the number of operations it can carry out in a second. Just imagine how big those numbers are: currently about 3.9 GHz, which means that the CPU manages to carry out almost 4 billion (really, billion!) operations per second. It is a number that is hard even to imagine.
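
Here is that claim as a quick back-of-the-envelope calculation (the 3.9 GHz figure is the one quoted above; treating one cycle as one operation is a simplification):

    clock_rate = 3.9e9                                      # 3.9 GHz = 3.9 billion cycles per second
    print(f"{clock_rate:,.0f} operations per second")       # 3,900,000,000
    print(f"{1 / clock_rate * 1e9:.2f} ns per operation")   # about 0.26 nanoseconds each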

A CPU can also have a few cores, which means, more or less, that it can carry out a few operations at the same time. Because of management and synchronization overhead, this does not mean that a CPU with 4 cores works 4 times faster.

One of the biggest challenges in designing more and more densely packed (and therefore faster and faster) units is that increasing the clock rate means increasing the heat dissipated by the unit. Since we are dealing with a huge number of densely packed transistors, a unit heats up very quickly to a high temperature. CPUs are therefore fitted with heat sinks and have their own fans attached. Despite this, they heat up to about 60-65°C. Every CPU also has a maximum temperature above which something bad can happen, meaning it can simply melt down…

The second crucial ingredient of a computer is its memory. The main problem with memory is the following: the bigger the memory, the more time it takes to write information to a given place or to find a given place in it. On the one hand the CPU needs very fast memory which can keep up with its incredible speed, and on the other hand the user needs big memory to store his or her data. When designing computers, people have managed to overcome this contradiction. In a computer you can find a whole sequence of different memory units: from the fastest memory, which works directly with the CPU, down to the slowest but biggest one, in the form of a hard drive. The memory closest to the CPU is called cache memory. It is very fast but small, and its job is to hold the data the CPU will use, and even to predict what data will be needed and prepare it. Cache memory itself is divided into levels (L1, L2, sometimes even L3): first the fastest and smallest one, then slightly slower but also slightly bigger ones. Still, all of them work incredibly fast.

The next memory in the sequence is called operating memory (due to a commonly used simplification we usually call it RAM). It is still not responsible for remembering anything for a long time, but only for collecting larger chunks of data which will be needed by the cache memories and the CPU. It is just an interchange station for data travelling from the relatively slow hard drive to the very fast cache memory. If a computer has a small amount of operating memory, it can easily get clogged and everything takes a lot of time, since the CPU has to wait for data.

The last place in this sequence belongs to the hard drive. It is responsible for storing all data and has quite a big capacity; recently 200 GB seems to be the absolute minimum. A classic hard drive consists of a stack of spinning magnetic platters and heads moving along their radius, magnetising or reading concentric circles of data (which across the platters form so-called cylinders). I have said that a hard drive is relatively slow. It may seem slow to a CPU, but actually it is quite fast: typically the platters spin at 7200 RPM (rotations per minute). That is very fast, but if we left the CPU waiting a full rotation, about 1/7200 of a minute, for every piece of data, our computer would be absolutely useless. CPUs need their data millions of times faster.
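
The rough arithmetic behind that last remark, assuming the 7200 RPM and 3.9 GHz figures mentioned above, looks like this:

    rpm = 7200
    rotation_time = 60 / rpm                                # one full rotation: ~8.3 milliseconds
    cpu_cycle = 1 / 3.9e9                                   # one CPU cycle at 3.9 GHz: ~0.26 ns

    print(f"{rotation_time * 1000:.1f} ms per rotation")    # 8.3 ms
    print(f"{rotation_time / cpu_cycle:,.0f} CPU cycles")   # ~32,500,000 cycles spent waiting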

The third main ingredient of a computer consists of the so-called input-output devices: for example a mouse, a keyboard, a screen, a printer, external memory, and various cards (a network interface controller, a video card or a sound card).

Many of those elements are placed on the so-called motherboard (main board) of a computer. On the motherboard you will find the CPU, RAM slots, slots for expansion cards and built-in controllers. The motherboard is responsible for the communication between them.

That is all I wanted to say about hardware, so now let us move to software, which sits somewhere between the user and the hardware. The user usually works with an application, for example a browser or a word processor. Such applications do not communicate directly with the hardware. A working environment for them is provided by a very special piece of software called the operating system. An operating system is responsible for:

  • assigning each application time on the CPU,
  • managing the operating memory,
  • managing access to the hardware, including hard drives,
  • sharing and protecting the computer’s resources, and providing authorization,
  • providing the user with an interface.

On PCs, the most commonly used systems are Microsoft’s operating systems (i.e. Windows, which started as a graphical interface for the DOS operating system and, since Windows 95, has been a fully fledged operating system with a new version every couple of years), Apple’s operating systems (on Apple computers), and finally operating systems from the Linux family (Linux is free software, which means that everybody can use and develop it for free, and various companies, foundations and communities create so-called distributions by adding more software to the Linux kernel, e.g. Debian, Ubuntu, Slackware, Red Hat, Fedora, PLD, and many others).

I should also mention mobile operating systems designed for mobile devices like smartphones. Among those, Android is the most popular. Android was designed by Google and is based on the Linux kernel. The second place goes to Apple’s iOS, and Microsoft’s systems are far behind.

The usual interaction of a user with an operating system consists of an authorization process and the use of an interface which allows one to run applications and manage files and folders. Obviously this interface has a graphical form now, but computers used to have only text interfaces. Traces of such an interface are still visible in modern Windows in the form of the command line. Using it nowadays is more of a curiosity than an essential skill. You can find it by searching for the Command Prompt (cmd). Try the following commands:

  • cd folder-name — changes the current folder to the given one
  • dir — lists all files in the current folder
  • ping google.com — sends test data to the given server and measures the response time
  • tracert uw.edu.pl — lists the sequence of servers on the way to the given one
  • cls — clears the command-line window

Enjoy!