Saturday, November 5, 2016

History Of Information Technology

Introduction

Information technology has been around nearly as long as people have, because every era has had some means of communicating through the technology available at the time. The history of information technology is commonly divided into four main ages. Only the latest (electronic) age and part of the electromechanical age really affect us today, but it is important to understand how we arrived at the technology we have now.

Ages

Premechanical

The premechanical age is the earliest age of information technology. It can be defined as the time between 3000 B.C. and 1450 A.D., so we are talking about a long time ago. When humans first started communicating, they used spoken language or simple picture drawings known as petroglyphs, which were usually carved in rock. Early alphabets, such as the Phoenician alphabet, were also developed.

Petroglyph
As alphabets became more popular and more people began writing information down, pens and paper were developed. Writing started off as marks in wet clay, but later paper was made from the papyrus plant. Perhaps the most notable paper of the era was made by the Chinese, who produced it from rags.
Now that people were writing down a lot of information, they needed ways to keep it all in permanent storage. This is when the first books and libraries were developed. You’ve probably heard of Egyptian scrolls, which were a popular way of recording information to save. Some groups of people were even binding paper together into a book-like form.
This period also saw the first numbering systems. Around 100 A.D., the first 1-9 system was created by people from India. However, it wasn’t until 875 A.D. (775 years later) that the number 0 was introduced. And once numbers existed, people wanted ways to work with them, so they created calculators - the very first information processors. The popular model of that time was the abacus.

Mechanical

The mechanical age is when we first start to see connections between our current technology and its ancestors. It can be defined as the time between 1450 and 1840. Many new technologies were developed in this era amid an explosion of interest in the field. Technologies like the slide rule (an analog computer used for multiplying and dividing) were invented. Blaise Pascal invented the Pascaline, a very popular mechanical calculator, and Charles Babbage designed the difference engine, which tabulated polynomial equations using the method of finite differences.
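The method of finite differences that Babbage’s engine mechanized is simple enough to sketch in a few lines of modern code. Once the first few values of a polynomial are known, every later value can be produced by addition alone - which is exactly what made it suited to gears and cranks. The quadratic below is an arbitrary illustrative choice, not one from Babbage’s actual tables.

```python
def tabulate(seeds, count):
    """Extend a polynomial value table using only repeated addition.

    seeds: the first (degree + 1) values, e.g. p(0), p(1), p(2)
           for a quadratic polynomial.
    count: how many values to produce in total.
    """
    # Reduce the seeds to the column of differences at x = 0:
    # [p(0), Δp(0), Δ²p(0), ...]. For a degree-d polynomial the
    # d-th difference is constant, which is what makes the trick work.
    diffs = list(seeds)
    for level in range(1, len(seeds)):
        for i in range(len(seeds) - 1, level - 1, -1):
            diffs[i] -= diffs[i - 1]
    values = []
    for _ in range(count):
        values.append(diffs[0])
        # One "turn of the crank": each register absorbs the one above it.
        for level in range(len(diffs) - 1):
            diffs[level] += diffs[level + 1]
    return values

# Seed with p(0), p(1), p(2) for p(x) = 2x² + 3x + 1.
print(tabulate([1, 6, 15], 6))  # [1, 6, 15, 28, 45, 66]
```

After the one-time setup of the difference column, extending the table never multiplies - each new value costs only d additions, which a machine of gears could perform mechanically.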

Difference Engine
Many different machines were created during this era. While none of them could yet perform more than one type of calculation in a single device, like our modern-day calculators, they show us how our all-in-one machines began. Also, if you compare the size of the machines invented in this time to the power behind them, it seems (to us) absolutely ridiculous that anybody would want to use them, but to the people living in that time, ALL of these inventions were HUGE.

Electromechanical

Now we are finally getting close to some technologies that resemble our modern-day technology. The electromechanical age can be defined as the time between 1840 and 1940. These are the beginnings of telecommunication. The telegraph was created in the early 1800s. Morse code was created by Samuel Morse in 1835. The telephone (one of the most popular forms of communication ever) was created by Alexander Graham Bell in 1876. The first radio was developed by Guglielmo Marconi in 1894. All of these were crucial emerging technologies that led to big advances in the information technology field.
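Morse code, mentioned above, is at heart just a mapping from letters to patterns of dots and dashes sent over the wire. A toy illustration (covering only a handful of letters, and using the modern International Morse alphabet rather than Morse’s original code):

```python
# A tiny subset of the International Morse alphabet, for illustration only.
MORSE = {"S": "...", "O": "---", "E": ".", "T": "-", "A": ".-", "N": "-."}

def encode(text):
    # Each letter becomes a dot/dash group; groups are separated by spaces.
    return " ".join(MORSE[ch] for ch in text.upper())

print(encode("sos"))  # ... --- ...
```

The clever part of the real alphabet is that common letters (E, T) got the shortest patterns, minimizing transmission time - an early instance of what we would now call variable-length coding.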
The first large-scale automatic digital computer in the United States was the Mark 1, created at Harvard University around 1940. This computer was 8 ft high, 50 ft long, and 2 ft wide, and it weighed 5 tons - HUGE. It was programmed using punch cards. How does your PC match up to this hunk of metal? It was from huge machines like this that people began to look at downsizing all the parts, first to make them usable by businesses and eventually in your own home.

Harvard Mark 1

Electronic

The electronic age is what we currently live in. It can be defined as the time from 1940 to the present. The ENIAC was the first high-speed digital computer capable of being reprogrammed to solve a full range of computing problems. It was designed for the U.S. Army to compute artillery firing tables. This machine was even bigger than the Mark 1, taking up 680 square feet and weighing 30 tons - HUGE. It mainly used vacuum tubes to do its calculations.
There are four main generations of digital computing. The first was the era of vacuum tubes and punch cards, like the ENIAC and Mark 1, with rotating magnetic drums used for internal storage. The second generation replaced vacuum tubes with transistors, punch cards with magnetic tape, and rotating magnetic drums with magnetic cores for internal storage. High-level programming languages such as FORTRAN and COBOL were also created during this time. The third generation replaced transistors with integrated circuits; magnetic tape was used throughout all computers, and magnetic cores gave way to metal oxide semiconductor memory. True operating systems appeared around this time, along with the programming language BASIC. The fourth and latest generation brought in CPUs (central processing units) containing memory, logic, and control circuits all on a single chip. The personal computer was developed (the Apple II), along with the graphical user interface (GUI).

Apple II

Important Books for Programming

I want to let you know about three great new books on agile you should read. Two of them are in the series I edit for Addison-Wesley; the third is by an author who previously wrote a book for that series.

Large-Scale Scrum by Larman and Vodde

“Large-Scale Scrum” by Craig Larman and Bas Vodde is great for anyone looking to scale Scrum up to medium and large projects. It provides a contrast to the very heavyweight Scaled Agile Framework (SAFe), and it comes with its own cutesy acronym, LeSS. In fact, the subtitle of the book is “More with LeSS.”
The book defines two scaling models. First is standard LeSS, which Larman and Vodde say is typically used for projects of around five teams, though it can certainly scale beyond that. For much larger projects, the book also defines “LeSS Huge,” which the authors report having used on projects with over 1,000 people.
The book is organized as you might expect, with chapters devoted to key Scrum topics such as the product owner, the product backlog, sprint planning, reviews and retrospectives, and so on.
I found the book to strike a perfect balance between being overly prescriptive and too general. You’ll leave the book with plenty of advice on how to scale a Scrum project. But you won’t leave feeling hamstrung by having too many rules placed on your teams.
In fact, the authors include a nice summary of LeSS and LeSS Huge rules at the end of the book, and it covers only three pages.

Developer Testing by Alexander Tarlinder

Early in my career as a programmer, I remember coming across the phrase, “You can’t test quality in.” I read this in an article that compared 1970s and 1980s U.S. automobile manufacturing to Japanese automobile manufacturing.
The author was saying the U.S. car manufacturers were producing cars of lower quality than their Japanese counterparts because U.S. car manufacturers were trying to test quality into their products. Only after the car was built would they test to see if it was high quality. If it wasn’t, they’d fix defects (say a poorly fitting door) to make the product higher quality.
This was in contrast to Japanese manufacturers who built quality into the process. A later colleague of mine referred to this by saying, “Quality is baked into our process.”
Quality is not something that can be added to a product. Trying to add quality after the product has been built would be like adding baking powder to a cake after the cake has been baked. It doesn’t work.
Alexander Tarlinder’s new book, “Developer Testing: Building Quality into Software,” teaches programmers how to bake or build quality right into the process. The book starts with fundamentals (what is unit testing?) but also delves deep into more advanced topics like testing with mock objects. Similarly, it adds to the debate about testing state vs. behavior.
The book comes in at just under 300 pages, but is encyclopedic in what it covers. This book is a must read for every programmer, even those who are already doing a great job at developer testing.

Strategize by Roman Pichler

Most agile processes offer no advice on forming a company or product strategy. Product backlogs or feature lists are just assumed to exist or to spring spontaneously from the mind of a product owner or key stakeholder. In his new book, “Strategize,” Roman Pichler fills this void in agile thinking.
Roman is a long-time Scrum trainer based in the UK. He has previously written books about the overall Scrum framework and about succeeding as a product owner.
In “Strategize,” Pichler covers how to form and then validate a strategy, including identifying the right audience for the product and delivering just the features they need. The book covers roadmapping, including addressing the unfortunate misconception that because a team is agile, they don’t need to know where they’re headed.
Pichler presents a very helpful roadmap selection matrix that helps identify what type of roadmap is appropriate for different types of projects. I’ve already put this to use in discussions with clients. And I’m becoming convinced that if a company had a bad experience with roadmapping in the past, it was likely because of doing the wrong type of roadmapping. This book’s roadmap selection matrix will fix that.
This book should be read by anyone involved in determining the future direction for a product or entire organization.

Categories of Hackers:

Script kiddie

Also known as a skid, this kind of hacker is someone who lacks knowledge of how an exploit works and relies on exploits that someone else created. A script kiddie may be able to compromise a target but certainly cannot debug or modify an exploit if it does not work.

Elite hacker

An elite hacker, also referred to as l33t or 1337, is someone with deep knowledge of how an exploit works; he or she is able to create exploits and also modify code that someone else wrote. This is someone with elite hacking skills.

Hacktivist

Hacktivists are groups of hackers who hack into computer systems for a cause or purpose. The purpose may be political gain, freedom of speech, human rights, and so on.

Ethical hacker

An ethical hacker is a person who is hired and permitted by an organization to attack its systems in order to identify vulnerabilities that an attacker might take advantage of. The sole difference between the terms “hacking” and “ethical hacking” is the permission.

Introduction to Ethical Hacking

Introduction to Hacking:

There are many definitions of “hacker.” Ask a group of people and you’ll get a different answer every time; as the saying goes, “more mouths will have more talks,” and that is the reason behind the different definitions - which in my opinion is justified, for everyone has a right to think differently. In the early 1990s, the word “hacker” was used to describe a great programmer, someone able to build complex logic. Unfortunately, over time the word gained negative hype, and the media started referring to a hacker as someone who discovers new ways of hacking into a system, be it a computer system or a programmable logic controller - someone capable of hacking into banks, stealing credit card information, and so on. This is the picture created by the media, and it is one-sided, because everything has both a positive and a negative aspect. The media has highlighted only the negative aspect; the people who have been protecting organizations by responsibly disclosing vulnerabilities are not highlighted. Still, if you look at the media’s definition of a hacker in the 1990s, you will find a few common characteristics, such as creativity, the ability to solve complex problems, and the ability to find new ways of compromising targets. The term has therefore been broken down into three types:

 1. White hat hacker

    This kind of hacker is often referred to as a security professional or security researcher. Such hackers are employed by an organization and are permitted to attack an organization to find vulnerabilities that an attacker might be able to exploit.

 2. Black hat hacker

    Also known as a cracker, this kind of hacker is referred to as a bad guy who uses his or her knowledge for negative purposes. These are the people the media usually calls hackers.

 3. Gray hat hacker

    This kind of hacker is intermediate between a white hat and a black hat hacker. For instance, a gray hat hacker might work as a security professional for an organization and responsibly disclose everything to them, yet leave a backdoor to access the systems later, or sell confidential information, obtained after compromising a company’s target server, to competitors.