Robot

Tuesday, September 9, 2008



A robot is a mechanical or virtual artificial agent. In practice, it is usually an electro-mechanical system which, by its appearance or movements, conveys a sense that it has intent or agency of its own. The word robot can refer to both physical robots and virtual software agents, but the latter are usually referred to as bots.[1] There is no consensus on which machines qualify as robots, but there is general agreement among experts and the public that robots tend to do some or all of the following: move around, operate a mechanical arm, sense and manipulate their environment, and exhibit intelligent behavior, especially behavior which mimics humans or animals.

Stories of artificial helpers and companions and attempts to create them have a long history, but fully autonomous machines only appeared in the 20th century. The first digitally operated and programmable robot, the Unimate, was installed in 1961 to lift hot pieces of metal from a die casting machine and stack them. Today, commercial and industrial robots are in widespread use performing jobs more cheaply or with greater accuracy and reliability than humans. They are also employed for jobs which are too dirty, dangerous or dull to be suitable for humans. Robots are widely used in manufacturing, assembly and packing, transport, earth and space exploration, surgery, weaponry, laboratory research, and mass production of consumer and industrial goods.

People have a generally positive perception of the robots they actually encounter. Robotic competitions are popular, and provide training as well as entertainment for technically-inclined students. Domestic robots for cleaning and maintenance and robotic toys are increasingly common in and around homes. Asians and Westerners have different expectations for the future of consumer robotics, but these expectations are generally positive. There is anxiety, however, over the economic impact of automation and the threat of robotic weaponry, anxiety which is not helped by the many villainous, intelligent, acrobatic robots in popular entertainment. Compared with their fictional counterparts, real robots are still benign, slow, dim-witted and clumsy.



Digital Camera

Tuesday, September 2, 2008

A digital camera (or digicam for short) is a camera that takes video or still photographs, or both, digitally by recording images on a light-sensitive sensor.

Many compact digital still cameras can record sound and moving video as well as still photographs. In the Western market, digital cameras outsell their 35 mm film counterparts.[1]

Digital cameras can include features that are not found in film cameras, such as displaying an image on the camera's screen immediately after it is recorded, the capacity to take thousands of images on a single small memory device, the ability to record video with sound, the ability to edit images, and deletion of images allowing re-use of the storage they occupied.

Digital cameras are incorporated into many devices ranging from PDAs and mobile phones (called camera phones) to vehicles. The Hubble Space Telescope and other astronomical devices are essentially specialised digital cameras.


Classification

Digital cameras can be classified into several categories:


Compact digital cameras

Compact cameras are designed to be small and portable; the smallest are described as subcompacts or "ultra-compacts". Compact cameras are usually designed to be easy to use, sacrificing advanced features and picture quality for compactness and simplicity; images can usually only be stored using lossy compression (JPEG). Most have a built-in flash, usually of low power, sufficient for nearby subjects. Live preview is almost always used to frame the photo. They may have limited motion picture capability. Compacts often have macro capability, but if they have zoom capability the range is usually less than for bridge and DSLR cameras. They have a greater depth of field, allowing objects within a large range of distances from the camera to be in sharp focus. They are particularly suitable for casual and "snapshot" use.

Bridge cameras

Bridge or SLR-like cameras are higher-end digital cameras that physically resemble DSLRs and share with them some advanced features, but share with compacts the framing of the photo using live preview and small sensor sizes.

Fujifilm FinePix S9000

Bridge cameras often have superzoom lenses which provide a very wide zoom range, typically between 10:1 and 18:1, which is attained at the cost of some distortions, including barrel and pincushion distortion, to a degree which varies with lens quality. These cameras are sometimes marketed as and confused with digital SLR cameras since the appearance is similar. Bridge cameras lack the mirror and reflex system of DSLRs, have so far been fitted with fixed (non-interchangeable) lenses (although in some cases accessory wide-angle or telephoto converters can be attached to the lens), can usually take movies with sound, and the scene is composed by viewing either the liquid crystal display or the electronic viewfinder (EVF). They are usually slower to operate than a true digital SLR, but they are capable of very good image quality (with sufficient light) while being more compact and lighter than DSLRs. The high-end models of this type have comparable resolutions to low and mid-range DSLRs. Many of these cameras can store images in lossless RAW format as an option to JPEG compression. The majority have a built-in flash, often a unit which flips up over the lens. The guide number tends to be between 11 and 15.
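As a side note on the guide-number figures just mentioned, the standard relationship (not specific to any particular camera) is that the guide number at ISO 100 equals subject distance multiplied by f-number, so the maximum usable flash distance follows directly. A minimal Python sketch, with an assumed guide number of 12:

    # Guide number (GN, metres at ISO 100) = subject distance x f-number,
    # so the maximum usable flash distance for a given aperture is GN / f-number.
    def max_flash_distance(guide_number_m, f_number):
        """Approximate maximum subject distance in metres."""
        return guide_number_m / f_number

    # Example: a typical bridge-camera flash (GN 12) used at f/2.8
    print(round(max_flash_distance(12, 2.8), 1))   # ~4.3 m

In other words, a built-in flash with a guide number of 11 to 15 only covers subjects a few metres away at typical apertures.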

Digital single lens reflex cameras


Digital single-lens reflex cameras (DSLRs) are digital cameras based on film single-lens reflex cameras (SLRs); both types are characterized by the existence of a mirror and reflex system. See the main article on DSLRs for a detailed treatment of this category.

Digital rangefinders

A rangefinder is a user-operated optical mechanism for measuring subject distance that was once widely used on film cameras. Most digital cameras measure subject distance automatically using acoustic or electronic techniques, but it is not customary to say that they have a rangefinder. The term rangefinder alone is sometimes used to mean a rangefinder camera, that is, a film camera equipped with a rangefinder, as distinct from an SLR or a simple camera with no way to measure distance.


Professional modular digital camera systems

This category includes very high end professional equipment that can be assembled from modular components (winders, grips, lenses, etc.) to suit particular purposes. Common brands include Hasselblad and Mamiya. They were developed for medium or large format film sizes, as these captured greater detail and could be enlarged more than 35 mm.

Typically these cameras are used in studios for commercial production; being bulky and awkward to carry, they are rarely used in action or nature photography. They can often be converted to either film or digital use by changing out the back part of the unit, hence the use of terms such as "digital back" or "film back". These cameras are very expensive (up to $40,000) and are typically not used by consumers.

Line-scan camera systems

A line-scan camera is a camera device containing a line-scan image sensor chip and a focusing mechanism. These cameras are almost solely used in industrial settings to capture an image of a constant stream of moving material. Unlike video cameras, line-scan cameras use a single array of pixel sensors instead of a matrix of them. Data from the camera arrives line by line: the camera scans a line, waits, and repeats. The data is commonly processed by a computer, which collects the one-dimensional line data and assembles it into a two-dimensional image. The collected two-dimensional image data is then processed by image-processing methods for industrial purposes.
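As a rough illustration of the line-to-image assembly just described, here is a minimal Python sketch; read_line() and the fake camera below are hypothetical stand-ins, not part of any real driver API:

    # Illustrative sketch: stacking 1-D line scans into a 2-D image.
    import numpy as np

    def acquire_image(read_line, num_lines):
        """Stack num_lines one-dimensional scans into a 2-D image array."""
        lines = [read_line() for _ in range(num_lines)]   # each call returns one row of pixels
        return np.vstack(lines)                           # shape: (num_lines, pixels_per_line)

    # Example with a fake camera that returns 2048 random 8-bit pixels per line
    fake_camera = lambda: np.random.randint(0, 256, 2048, dtype=np.uint8)
    image = acquire_image(fake_camera, num_lines=1024)
    print(image.shape)   # (1024, 2048)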

Line-scan technology is capable of capturing data extremely fast and at very high image resolutions. Under these conditions, the collected image data can exceed 100 MB in a fraction of a second. Integrated systems based on line-scan cameras are therefore usually designed to streamline the camera's output in order to meet the system's objective, using affordable computer technology.
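To make that throughput claim concrete, here is a back-of-envelope calculation; the line length and line rate are assumed example values, not figures from this article:

    # Rough line-scan throughput arithmetic with assumed example values.
    pixels_per_line = 2048
    bytes_per_pixel = 1            # 8-bit greyscale
    lines_per_second = 100_000     # a plausible line rate for a fast line-scan camera
    rate = pixels_per_line * bytes_per_pixel * lines_per_second
    print(rate / 1e6, "MB/s")      # 204.8 MB/s -> over 100 MB in roughly half a second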

Line-scan cameras intended for the parcel handling industry can integrate adaptive focusing mechanisms to scan six sides of any rectangular parcel in focus, regardless of angle and size. The resulting 2-D captured images can contain, but are not limited to, 1D and 2D barcodes, address information, and any pattern that can be processed via image processing methods. Since the images are 2-D, they are also human-readable and can be viewed on a computer screen. Advanced integrated systems include video coding and optical character recognition (OCR).

Conversion of film cameras to digital

When digital cameras became common, a question many photographers asked was whether their film cameras could be converted to digital. The answer was yes and no. For the majority of 35 mm film cameras the answer is no; the reworking and cost would be too great, especially as lenses have been evolving as well as cameras. For the most part a conversion to digital, to give enough space for the electronics and allow a liquid crystal display for previewing, would require removing the back of the camera and replacing it with a custom-built digital unit.

Many early professional SLR cameras, such as the NC2000 and the Kodak DCS series, were developed from 35 mm film cameras. The technology of the time, however, meant that rather than being a digital "back" the body was mounted on a large and blocky digital unit, often bigger than the camera portion itself. These were factory built cameras, however, not aftermarket conversions.

A notable exception was a device called the EFS-1, which was developed by Silicon Film from c. 1998–2001. It was intended to insert into a film camera in the place of film, giving the camera a 1.3 MP resolution and a capacity of 24 shots. Units were demonstrated, and in 2002 the company was developing the EFS-10, a 10 MP device that was more a true digital back.

A few 35 mm cameras have had digital backs made by their manufacturer, Leica being a notable example. Medium format and large format cameras (those using film stock greater than 35 mm) have a low unit production, and typical digital backs for them cost over $10,000. These cameras also tend to be highly modular, with handgrips, film backs, winders, and lenses available separately to fit various needs.

The very large sensors these backs use lead to enormous image sizes. The largest in early 2006 was the Phase One P45 39 MP image back, creating a single TIFF image of up to 224.6 MB. Medium format digitals are geared more towards studio and portrait photography than their smaller DSLR counterparts; the ISO speed in particular tends to have a maximum of 400, versus 6400 for some DSLR cameras.
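As a rough sanity check on the quoted file size, assuming an uncompressed TIFF with three colour channels at 16 bits each:

    # Back-of-envelope check: 39 MP x 3 channels x 2 bytes per channel.
    pixels = 39_000_000
    bytes_per_pixel = 3 * 2
    size_mib = pixels * bytes_per_pixel / 2**20
    print(round(size_mib, 1))      # ~223.2 MiB, in line with the ~224.6 MB quoted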

History

Early development

The concept of digitizing images on scanners, and the concept of digitizing video signals, predate the concept of making still pictures by digitizing signals from an array of discrete sensor elements. Eugene F. Lally of the Jet Propulsion Laboratory published the first description of how to produce still photos in a digital domain using a mosaic photosensor.[2] The purpose was to provide onboard navigation information to astronauts during missions to planets. The mosaic array periodically recorded still photos of star and planet locations during transit and when approaching a planet provided additional stadiametric information for orbiting and landing guidance. The concept included camera design elements foreshadowing the first digital camera.

Texas Instruments engineer Willis Adcock designed a filmless camera and applied for a patent in 1972, but it is not known whether it was ever built.[3] The first recorded attempt at building a digital camera was in 1975 by Steven Sasson, an engineer at Eastman Kodak.[4] It used the then-new solid-state CCD image sensor chips developed by Fairchild Semiconductor in 1973.[5] The camera weighed 8 pounds (3.6 kg), recorded black and white images to a cassette tape, had a resolution of 0.01 megapixels (10,000 pixels), and took 23 seconds to capture its first image in December 1975. The prototype camera was a technical exercise, not intended for production.

Analog electronic cameras

Handheld electronic cameras, in the sense of a device meant to be carried and used like a handheld film camera, appeared in 1981 with the demonstration of the Sony Mavica (Magnetic Video Camera). This is not to be confused with the later cameras by Sony that also bore the Mavica name. This was an analog camera, in that it recorded pixel signals continuously, as videotape machines did, without converting them to discrete levels; it recorded television-like signals to a 2 × 2 inch "video floppy". In essence it was a video movie camera that recorded single frames, 50 per disk in field mode and 25 per disk in frame mode. The image quality was considered equal to that of then-current televisions.

Analog cameras do not appear to have reached the market until 1986 with the Canon RC-701. Canon demonstrated a prototype of this model at the 1984 Summer Olympics, printing the images in the Yomiuri Shimbun, a Japanese newspaper. In the United States, the first publication to use these cameras for real reportage was USA Today, in its coverage of World Series baseball. Several factors held back the widespread adoption of analog cameras: the cost (upwards of $20,000), poor image quality compared to film, and the lack of quality affordable printers. Capturing and printing an image originally required access to equipment such as a frame grabber, which was beyond the reach of the average consumer. The "video floppy" disks later had several reader devices available for viewing on a screen, but were never standardized as a computer drive.

The early adopters tended to be in the news media, where the cost was negated by the utility and the ability to transmit images by telephone lines. The poor image quality was offset by the low resolution of newspaper graphics. This capability to transmit images without a satellite link was useful during the Tiananmen Square protests of 1989 and the first Gulf War in 1991.

US government agencies also took a strong interest in the still video concept, notably the US Navy for use as a real time air-to-sea surveillance system.

The first analog camera marketed to consumers may have been the Canon RC-250 Xapshot in 1988. A notable analog camera produced the same year was the Nikon QV-1000C, designed as a press camera and not offered for sale to general users, which sold only a few hundred units. It recorded images in greyscale, and the quality in newspaper print was equal to film cameras. In appearance it closely resembled a modern digital single-lens reflex camera. Images were stored on video floppy disks.

The arrival of true digital cameras

The first true digital camera that recorded images as a computerized file was likely the Fuji DS-1P of 1988, which recorded to a 16 MB internal memory card that used a battery to keep the data in memory. This camera was never marketed in the United States, and has not been confirmed to have shipped even in Japan.

The first commercially available digital camera was the 1990 Dycam Model 1; it also sold as the Logitech Fotoman. It used a CCD image sensor, stored pictures digitally, and connected directly to a PC for download.[6][7][8]

In 1991, Kodak brought to market the Kodak DCS-100, the beginning of a long line of professional SLR cameras by Kodak that were based in part on film bodies, often Nikons. It used a 1.3 megapixel sensor and was priced at $13,000.

The move to digital formats was helped by the formation of the first JPEG and MPEG standards in 1988, which allowed image and video files to be compressed for storage. The first consumer camera with a liquid crystal display on the back was the Casio QV-10 in 1995, and the first camera to use CompactFlash was the Kodak DC-25 in 1996.

The marketplace for consumer digital cameras originally consisted of low-resolution (either analog or digital) cameras built for utility. In 1997 the first megapixel cameras for consumers were marketed. The first camera that offered the ability to record video clips may have been the Ricoh RDC-1 in 1995.

1999 saw the introduction of the Nikon D1, a 2.74 megapixel camera that was the first digital SLR developed entirely by a major manufacturer, and at a cost of under $6,000 at introduction was affordable for professional photographers and high-end consumers. This camera also used Nikon F-mount lenses, which meant film photographers could use many of the same lenses they already owned.

Also in 1999, Minolta introduced the RD-3000 D-SLR at 2.7 megapixels. This camera found many professional adherents. Limitations to the system included the need to use Vectis lenses, which were designed for APS size film. The camera was sold with 5 lenses at various focal lengths and ranges (zoom). Minolta did not produce another D-SLR until September 2004, when they introduced the Alpha 7D (Alpha in Japan, Maxxum in North America, Dynax in the rest of the world), this time using the Minolta A-mount system from its 35 mm line of cameras.

2003 saw the introduction of the Canon EOS 300D, also known as the Digital Rebel, a 6 megapixel camera and the first DSLR priced under $1,000, and marketed to consumers.

Image resolution

The resolution of a digital camera is often limited by the camera sensor (typically a CCD or CMOS sensor chip) that turns light into discrete signals, replacing the job of film in traditional photography. The sensor is made up of millions of "buckets" that essentially count the number of photons that strike the sensor. This means that the brighter the image at that point, the larger the value read for that pixel. Depending on the physical structure of the sensor, a color filter array may be used, which requires a demosaicing/interpolation algorithm. The number of resulting pixels in the image determines its "pixel count". For example, a 640x480 image would have 307,200 pixels, or approximately 307 kilopixels; a 3872x2592 image would have 10,036,224 pixels, or approximately 10 megapixels.
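The pixel-count arithmetic above can be spelled out in a couple of lines of Python:

    # Pixel count is simply width x height; divide by one million for megapixels.
    for w, h in [(640, 480), (3872, 2592)]:
        n = w * h
        print(f"{w}x{h}: {n} pixels ({n / 1e6:.1f} MP)")
    # 640x480: 307200 pixels (0.3 MP)
    # 3872x2592: 10036224 pixels (10.0 MP)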

The pixel count alone is commonly presumed to indicate the resolution of a camera, but this is a misconception. There are several other factors that impact a sensor's resolution. Some of these factors include sensor size, lens quality, and the organization of the pixels (for example, a monochrome camera without a Bayer filter mosaic has a higher resolution than a typical color camera). Many digital compact cameras are criticized for having excessive pixels, in that the sensors can be so small that the resolution of the sensor is greater than the lens could possibly deliver.

Australian recommended retail price of Kodak digital cameras

As the technology has improved, costs have decreased dramatically. Measured in "pixels per dollar" as a basic measure of value for a digital camera, there has been a continuous and steady increase in the number of pixels each dollar buys in a new camera, consistent with the principles of Moore's Law. This predictability of camera prices was first presented in 1998 at the Australian PMA DIMA conference by Barry Hendy and has since been referred to as "Hendy's Law".[9]
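A minimal sketch of the "pixels per dollar" metric itself; the example figures below are illustrative placeholders, not data from the chart above:

    # Pixels per dollar: total pixel count divided by the camera's price.
    def pixels_per_dollar(megapixels, price_usd):
        return megapixels * 1e6 / price_usd

    print(round(pixels_per_dollar(2.1, 500)))    # 4200 pixels per dollar
    print(round(pixels_per_dollar(10.0, 300)))   # 33333 pixels per dollar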

Since only a few aspect ratios are commonly used (especially 4:3 and 3:2), the number of sensor sizes that are useful is limited. Furthermore, sensor manufacturers do not manufacture every possible sensor size but take incremental steps in sizes. For example, in 2007 the three largest sensors (in terms of pixel count) used by Canon were the 21.1, 16.6, and 12.8 megapixel CMOS sensors.
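As a rough illustration of how a pixel count and an aspect ratio together fix a sensor's pixel dimensions (the megapixel figures below are just examples):

    # Approximate pixel dimensions for a given pixel count and aspect ratio.
    import math

    def dimensions(pixel_count, aspect_w, aspect_h):
        height = math.sqrt(pixel_count * aspect_h / aspect_w)
        width = height * aspect_w / aspect_h
        return round(width), round(height)

    print(dimensions(10_000_000, 3, 2))   # (3873, 2582) for a 10 MP 3:2 sensor
    print(dimensions(12_800_000, 3, 2))   # (4382, 2921) for a 12.8 MP 3:2 sensor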

Television

Thursday, August 28, 2008

Television is a widely used telecommunication medium for sending (broadcasting) and receiving moving images, either monochromatic ("black and white") or color, usually accompanied by sound. "Television" may also refer specifically to a television set, television programming or television transmission. The word is derived from mixed Latin and Greek roots, meaning "far sight": Greek tele (τῆλε), far, and Latin visio, sight (from video, vis- to see, or to view in the first person).

Commercially available since the late 1930s, the television set has become a common communications receiver in homes, businesses and institutions, particularly as a source of entertainment and news. Since the 1970s, recordings on video cassettes, and later, digital media such as DVDs, have resulted in the television frequently being used for viewing recorded as well as broadcast material.

A standard television set comprises multiple internal electronic circuits, including those for tuning and decoding broadcast signals. A display device which lacks these internal circuits is therefore properly called a monitor, rather than a television. A television set may be designed to handle other than traditional broadcast or recorded signals and formats, such as closed-circuit television (CCTV), digital television (DTV) and high-definition television (HDTV).


History

In its early stages of development, television included only those devices employing a combination of optical, mechanical and electronic technologies to capture, transmit and display a visual image. By the late 1920s, however, those employing only optical and electronic technologies were being explored. All modern television systems rely on the latter; however, the knowledge gained from the work on mechanical systems was crucial in the development of fully electronic television.

In 1884 Paul Gottlieb Nipkow, a 20-year-old university student in Germany, patented the first electromechanical television system, which employed a scanning disk, a spinning disk with a series of holes spiraling toward the center, for "rasterization", the process of converting a visual image into a stream of electrical pulses. The holes were spaced at equal angular intervals such that in a single rotation the disk would allow light to pass through each hole and onto a light-sensitive selenium sensor which produced the electrical pulses. As an image was focused on the rotating disk, each hole captured a horizontal "slice" of the whole image.
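As a loose software analogy (not a physical model of the disk), the rasterization idea can be sketched in Python: the image is read out one horizontal slice at a time, producing a single stream of brightness values, much as each hole's sweep produced a varying current from the selenium cell.

    # Flatten a 2-D greyscale "image" into a 1-D, line-by-line signal.
    import numpy as np

    def rasterize(image):
        return image.reshape(-1)   # row-major order: line 1, then line 2, ...

    frame = np.random.randint(0, 256, (30, 40), dtype=np.uint8)  # a 30-line image
    signal = rasterize(frame)
    print(signal.shape)   # (1200,) -- one continuous stream of samples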

Nipkow's design would not be practical until advances in amplifier tube technology became available in 1907. Even then the device was only useful for transmitting still halftone images (those represented by equally spaced dots of varying size) over telegraph or telephone lines. Later designs would use a rotating mirror-drum scanner to capture the image and a cathode ray tube (CRT) as a display device, but moving images were still not possible, due to the poor sensitivity of the selenium sensors.

Scottish inventor John Logie Baird demonstrated the transmission of moving silhouette images in London in 1925, and of moving, monochromatic images in 1926. Baird's scanning disk produced an image of 30 lines resolution, barely enough to discern a human face, from a double spiral of lenses.

By 1927, Russian inventor Léon Theremin had developed a mirror drum-based television system which used interlacing to achieve an image resolution of 100 lines.

Also in 1927, Herbert E. Ives of Bell Labs transmitted moving images from a 50-aperture disk producing 16 frames per second over a cable from Washington, DC to New York City, and via radio from Whippany, New Jersey. Ives used viewing screens as large as 24 by 30 inches (60 by 75 centimeters). His subjects included Secretary of Commerce Herbert Hoover.

Meaning Of Technology


Technology is a broad concept that deals with a species' usage and knowledge of tools and crafts, and how it affects a species' ability to control and adapt to its environment. In human society, it is a consequence of science and engineering, although several technological advances predate the two concepts. Technology is a term with origins in the Greek technologia (τεχνολογία), from techne (τέχνη, "craft") and logia (λογία, "saying").[1] However, a strict definition is elusive; "technology" can refer to material objects of use to humanity, such as machines, hardware or utensils, but can also encompass broader themes, including systems, methods of organization, and techniques. The term can either be applied generally or to specific areas: examples include "construction technology", "medical technology", or "state-of-the-art technology".

The human race's use of technology began with the conversion of natural resources into simple tools. The prehistorical discovery of the ability to control fire increased the available sources of food and the invention of the wheel helped humans in travelling in and controlling their environment. Recent technological developments, including the printing press, the telephone, and the Internet, have lessened physical barriers to communication and allowed humans to interact on a global scale. However, not all technology has been used for peaceful purposes; the development of weapons of ever-increasing destructive power has progressed throughout history, from clubs to nuclear weapons.

Technology has affected society and its surroundings in a number of ways. In many societies, technology has helped develop more advanced economies (including today's global economy) and has allowed the rise of a leisure class. Many technological processes produce unwanted by-products, known as pollution, and deplete natural resources, to the detriment of the Earth and its environment. Various implementations of technology influence the values of a society and new technology often raises new ethical questions. Examples include the rise of the notion of efficiency in terms of human productivity, a term originally applied only to machines, and the challenge of traditional norms.

Philosophical debates have arisen over the present and future use of technology in society, with disagreements over whether technology improves the human condition or worsens it. Neo-Luddism, anarcho-primitivism, and similar movements criticise the pervasiveness of technology in the modern world, claiming that it harms the environment and alienates people; proponents of ideologies such as transhumanism and techno-progressivism view continued technological progress as beneficial to society and the human condition. Indeed, until recently, it was believed that the development of technology was restricted only to human beings, but recent scientific studies indicate that other primates and certain dolphin communities have developed simple tools and learned to pass their knowledge to other generations.


Definition and usage

In general, technology is the relationship that society has with its tools and crafts, and to what extent society can control its environment. The Merriam-Webster dictionary offers a definition of the term: "the practical application of knowledge especially in a particular area" and "a capability given by the practical application of knowledge".[1] Ursula Franklin, in her 1989 "Real World of Technology" lecture, gave another definition of the concept; it is "practice, the way we do things around here".[2] The term is often used to imply a specific field of technology, or to refer to high technology or just consumer electronics, rather than technology as a whole.[3] Bernard Stiegler, in Technics and Time, 1, defines technology in two ways: as "the pursuit of life by means other than life", and as "organized inorganic matter."[4]

Technology can be most broadly defined as the entities, both material and immaterial, created by the application of mental and physical effort in order to achieve some value. In this usage, technology refers to tools and machines that may be used to solve real-world problems. It is a far-reaching term that may include simple tools, such as a crowbar or wooden spoon, or more complex machines, such as a space station or particle accelerator. Tools and machines need not be material; virtual technology, such as computer software and business methods, falls under this definition of technology.[5]

The word "technology" can also be used to refer to a collection of techniques. In this context, it is the current state of humanity's knowledge of how to combine resources to produce desired products, to solve problems, fulfill needs, or satisfy wants; it includes technical methods, skills, processes, techniques, tools and raw materials. When combined with another term, such as "medical technology" or "space technology", it refers to the state of the respective field's knowledge and tools. "State-of-the-art technology" refers to the high technology available to humanity in any field.

Technology can be viewed as an activity that forms or changes culture.[6] Additionally, technology is the application of math, science, and the arts for the benefit of life as it is known. A modern example is the rise of communication technology, which has lessened barriers to human interaction and, as a result, has helped spawn new subcultures; the rise of cyberculture has, at its basis, the development of the Internet and the computer.[7] Not all technology enhances culture in a creative way; technology can also help facilitate political oppression and war via tools such as guns. As a cultural activity, technology predates both science and engineering, each of which formalize some aspects of technological endeavor.

Science, engineering and technology

The distinction between science, engineering and technology is not always clear. Science is the reasoned investigation or study of phenomena, aimed at discovering enduring principles among elements of the phenomenal world by employing formal techniques such as the scientific method.[8] Technologies are not usually exclusively products of science, because they have to satisfy requirements such as utility, usability and safety.

Engineering is the goal-oriented process of designing and making tools and systems to exploit natural phenomena for practical human means, often (but not always) using results and techniques from science. The development of technology may draw upon many fields of knowledge, including scientific, engineering, mathematical, linguistic, and historical knowledge, to achieve some practical result.

Technology is often a consequence of science and engineering — although technology as a human activity precedes the two fields. For example, science might study the flow of electrons in electrical conductors, by using already-existing tools and knowledge. This new-found knowledge may then be used by engineers to create new tools and machines, such as semiconductors, computers, and other forms of advanced technology. In this sense, scientists and engineers may both be considered technologists; the three fields are often considered as one for the purposes of research and reference.[9]

Role in human history

Paleolithic (2.5 million – 10,000 BCE)

A primitive chopper

The use of tools by early humans was partly a process of discovery, partly of evolution. Early humans evolved from a race of foraging hominids which were already bipedal,[10] with a brain mass approximately one third that of modern humans.[11] Tool use remained relatively unchanged for most of early human history, but approximately 50,000 years ago, a complex set of behaviors and tool use emerged, believed by many archaeologists to be connected to the emergence of fully-modern language.[12]

Stone tools

Hand axes from the Acheulian period
A Clovis point, made via pressure flaking

Human ancestors have been using stone and other tools since long before the emergence of Homo sapiens approximately 200,000 years ago.[13] The earliest methods of stone tool making, known as the Oldowan "industry", date back to at least 2.3 million years ago,[14] with the earliest direct evidence of tool usage found in Ethiopia within the Great Rift Valley, dating back to 2.5 million years ago.[15] This era of stone tool use is called the Paleolithic, or "Old stone age", and spans all of human history up to the development of agriculture approximately 12,000 years ago.

To make a stone tool, a "core" of hard stone with specific flaking properties (such as flint) was struck with a hammerstone. This flaking produced a sharp edge on the core stone as well as on the flakes, either of which could be used as tools, primarily in the form of choppers or scrapers.[16] These tools greatly aided the early humans in their hunter-gatherer lifestyle to perform a variety of tasks including butchering carcasses (and breaking bones to get at the marrow); chopping wood; cracking open nuts; skinning an animal for its hide; and even forming other tools out of softer materials such as bone and wood.[17]

The earliest stone tools were crude, being little more than a fractured rock. In the Acheulian era, beginning approximately 1.65 million years ago, methods of working these stones into specific shapes, such as hand axes, emerged. The Middle Paleolithic, approximately 300,000 years ago, saw the introduction of the prepared-core technique, where multiple blades could be rapidly formed from a single core stone.[16] The Upper Paleolithic, beginning approximately 40,000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely.[18]

Fire

The discovery and utilization of fire, a simple energy source with many profound uses, was a turning point in the technological evolution of humankind.[19] The exact date of its discovery is not known; evidence of burnt animal bones at the Cradle of Humankind suggests that the domestication of fire occurred before 1,000,000 BCE;[20] scholarly consensus indicates that Homo erectus had controlled fire by between 500,000 BCE and 400,000 BCE.[21][22] Fire, fueled with wood and charcoal, allowed early humans to cook their food to increase its digestibility, improving its nutrient value and broadening the number of foods that could be eaten.

Clothing and shelter

Other technological advances made during the Paleolithic era were clothing and shelter; the adoption of both technologies cannot be dated exactly, but they were a key to humanity's progress. As the Paleolithic era progressed, dwellings became more sophisticated and more elaborate; as early as 380,000 BCE, humans were constructing temporary wood huts.[24][25] Clothing, adapted from the fur and hides of hunted animals, helped humanity expand into colder regions; humans began to migrate out of Africa by 200,000 BCE and into other continents, such as Eurasia.[26]

Humans began to work bones, antler, and hides, as evidenced by burins and racloirs produced during this period.[citation needed]

Neolithic through Classical Antiquity (10,000 BCE – 300 CE)

An array of Neolithic artifacts, including bracelets, axe heads, chisels, and polishing tools.

Man's technological ascent began in earnest in what is known as the Neolithic period ("New stone age"). The invention of polished stone axes was a major advance because it allowed forest clearance on a large scale to create farms. The discovery of agriculture allowed for the feeding of larger populations, and the transition to a sedentary lifestyle increased the number of children that could be simultaneously raised, as young children no longer needed to be carried, as was the case with the nomadic lifestyle. Additionally, children could contribute labor to the raising of crops more readily than they could to the hunter-gatherer lifestyle.[27][28]

With this increase in population and availability of labor came an increase in labor specialization.[29] What triggered the progression from early Neolithic villages to the first cities, such as Uruk, and the first civilizations, such as Sumer, is not specifically known; however, the emergence of increasingly hierarchical social structures, the specialization of labor, trade and war amongst adjacent cultures, and the need for collective action to overcome environmental challenges, such as the building of dikes and reservoirs, are all thought to have played a role.[30]

Metal tools

Continuing improvements led to the furnace and bellows and provided the ability to smelt and forge native metals (naturally occurring in relatively pure form).[31] Gold, copper, silver, and lead were such early metals. The advantages of copper tools over stone, bone, and wooden tools were quickly apparent to early humans, and native copper was probably used from near the beginning of Neolithic times (about 8000 BCE).[32] Native copper does not naturally occur in large amounts, but copper ores are quite common and some of them produce metal easily when burned in wood or charcoal fires. Eventually, the working of metals led to the discovery of alloys such as bronze and brass (about 4000 BCE). The first uses of iron alloys such as steel date to around 1400 BCE.

Energy and Transport

Meanwhile, humans were learning to harness other forms of energy. The earliest known use of wind power is the sailboat.[citation needed] The earliest record of a ship under sail is shown on an Egyptian pot dating back to 3200 BCE.[citation needed] From prehistoric times, Egyptians probably used the power of the Nile's annual floods to irrigate their lands, gradually learning to regulate much of it through purposely built irrigation channels and "catch" basins. Similarly, the early peoples of Mesopotamia, the Sumerians, learned to use the Tigris and Euphrates rivers for much the same purposes. But more extensive use of wind and water (and even human) power required another invention.

The wheel was invented circa 4000 BCE.

According to archaeologists, the wheel was invented around 4000 BCE. The wheel was likely independently invented in Mesopotamia (in present-day Iraq) as well. Estimates on when this may have occurred range from 5500 to 3000 BCE, with most experts putting it closer to 4000 BCE. The oldest artifacts with drawings that depict wheeled carts date from about 3000 BCE; however, the wheel may have been in use for millennia before these drawings were made. There is also evidence from the same period of time that wheels were used for the production of pottery. (Note that the original potter's wheel was probably not a wheel, but rather an irregularly shaped slab of flat wood with a small hollowed or pierced area near the center and mounted on a peg driven into the earth. It would have been rotated by repeated tugs by the potter or his assistant.) More recently, the oldest-known wooden wheel in the world was found in the Ljubljana marshes of Slovenia.[33]

The invention of the wheel revolutionized activities as disparate as transportation, war, and the production of pottery (for which it may have been first used). It didn't take long to discover that wheeled wagons could be used to carry heavy loads and fast (rotary) potters' wheels enabled early mass production of pottery. But it was the use of the wheel as a transformer of energy (through water wheels, windmills, and even treadmills) that revolutionized the application of nonhuman power sources.

Modern history (0 CE – present)

Tools include both simple machines (such as the lever, the screw, and the pulley), and more complex machines (such as the clock, the engine, the electric generator and the electric motor, the computer, radio, and the Space Station, among many others). As tools increase in complexity, so does the type of knowledge needed to support them. Complex modern machines require libraries of written technical manuals of collected information that has continually increased and improved; their designers, builders, maintainers, and users often require the mastery of decades of sophisticated general and specific training. Moreover, these tools have become so complex that a comprehensive infrastructure of technical knowledge-based lesser tools, processes and practices (complex tools in themselves) exists to support them, including engineering, medicine, and computer science. Complex manufacturing and construction techniques and organizations are needed to construct and maintain them. Entire industries have arisen to support and develop succeeding generations of increasingly more complex tools.

The relationship of technology with society (culture) is generally characterized as synergistic, symbiotic, co-dependent, co-influential, and co-producing, i.e. technology and society depend heavily one upon the other (technology upon culture, and culture upon technology). It is also generally believed that this synergistic relationship first occurred at the dawn of humankind with the invention of simple tools, and continues with modern technologies today. Today and throughout history, technology influences and is influenced by such societal issues/factors as economics, values, ethics, institutions, groups, the environment, and government, among others. The discipline studying the impacts of science, technology, and society and vice versa is called Science and technology in society.

Technology and philosophy

Technicism

Generally, technicism is an over-reliance on or overconfidence in technology as a benefactor of society.

Taken to extreme, some argue that technicism is the belief that humanity will ultimately be able to control the entirety of existence using technology. In other words, human beings will someday be able to master all problems and possibly even control the future using technology. Some, such as Monsma,[34] connect these ideas to the abdication of religion as a higher moral authority.

More commonly, the term technicism is used as a criticism of the commonly held belief that newer, more recently developed technology is "better." For example, more recently developed computers are faster than older computers, and more recently developed cars have greater gas efficiency and more features than older cars. Because current technologies are generally accepted as good, future technological developments are not considered circumspectly, resulting in what seems to be a blind acceptance of technological development.