
Sunday, December 16, 2012

Crystalline Silicon -- The Ideal Crystal


A large single crystal of pure silicon forms the starting point for the monocrystalline silicon solar cell –  the most efficient type in common use. As we shall see, the simple and elegant structure of such crystals makes it comparatively easy to explain the basic semiconductor physics and operation of PV cells. We are talking here of silicon refined to very high purity, similar to that used by the electronics industry to make semiconductor devices (diodes, transistors, and integrated circuits including computer chips). Its purity is typically 99.99999%. This contrasts with the far less pure metallurgical grade silicon, produced by reducing quartzite in electric arc furnaces, that is used to make special steels and alloys.


The Czochralski (CZ) method of growing silicon crystals is quite easy to visualize. Chunks of pure silicon with no particular crystallographic structure are melted at 1414°C in a graphite crucible. A small seed of silicon is then brought into contact with the surface of the melt to start crystallization. Molten silicon solidifies at the interface between seed and melt as the seed is slowly withdrawn. A large ingot begins to grow both vertically and laterally with the atoms tending to arrange themselves in a perfect crystal lattice.

Unfortunately, this classic method of producing crystals has a number of disadvantages. Crystal growth is slow and energy-intensive, leading to high production costs. Impurities may be introduced due to interaction between the melt and the crucible. And in the case of PV the aim is of course to produce thin solar cell wafers rather than large ingots, so wire saws are used to cut the ingot into thin slices, a time-consuming process that involves discarding valuable material. For these reasons the PV industry has spent a lot of R&D effort investigating alternatives, including pulling crystals in thin sheet or ribbon form, and some of these are now used in volume production. Whatever method is employed, the desired result is pure crystalline silicon with a simple and consistent atomic structure.

The element silicon has atomic number 14, meaning that each atom has 14 negatively charged electrons orbiting a positively charged nucleus, rather like a miniature solar system. Ten of the electrons are tightly bound to the nucleus and play no further part in the PV story, but the other four valence electrons are crucial and explain why each atom aligns itself with four immediate neighbours in the crystal. This is illustrated by Figure 2.4(a). The 'glue' bonding two atoms together is two shared valence electrons, one from each atom. Since each atom has four valence electrons that are not tightly bound to its nucleus, a perfect lattice structure is formed when each atom forms bonds with its four nearest neighbours (which are actually at the vertices of a three-dimensional tetrahedron, but shown here in two dimensions for simplicity). The structure has profound implications for the fundamental physics of silicon solar cells.

Silicon in its pure state is referred to as an intrinsic  semiconductor. It is neither an insulator like glass, nor a conductor like copper, but something in between. At low temperatures its valence electrons are tightly constrained by bonds, as in part (a) of the figure, and it acts as an insulator. But bonds can be broken if sufficiently jolted by an external source of energy such as heat or light, creating electrons that are free to migrate through the lattice. If we shine light on the crystal the tiny packets, or quanta, of light energy can produce broken bonds if sufficiently energetic. The silicon becomes a conductor, and the more bonds are broken the greater its conductivity.

Figure 2.4(b) shows an electron ε1 that has broken free to wander through the lattice. It leaves behind a broken bond, indicated by a dotted line. The free electron carries a negative charge and, since the crystal remains electrically neutral, the broken bond must be left with a positive charge. In effect it is a positively charged particle, known as a hole. We see that breaking a bond has given rise to a pair of equal and opposite charged 'particles', an electron and a hole. Not surprisingly they are referred to as an electron-hole pair.

At first sight the hole might appear to be an immovable object fixed in the crystal lattice. But now consider the electron ε2  shown in the figure, which has broken free from somewhere else in the lattice. It is quite likely to jump into the vacant spot left by the first electron, restoring the original broken bond, but leaving a new broken bond behind. In this way a broken bond, or hole, can also move through the crystal, but as a positive charge. It is analogous to a bubble moving in a liquid; as the liquid moves one way the bubble is seen travelling in the opposite direction.

We see that the electrical properties of intrinsic silicon depend on the number of mobile electron–hole pairs in the crystal lattice. At low temperatures, in the dark, it is effectively an insulator. At higher temperatures, or in sunlight, it becomes a conductor. If we attach two contacts and apply an external voltage using a battery, current will flow – due to free electrons moving one way, holes the other. We have now reached an important stage in understanding how a silicon wafer can be turned into a practical solar cell.

Yet there is a vital missing link: remove the external voltage and the electrons and holes wander randomly in the crystal lattice with no preferred directions. There is no tendency for them to produce current flow in an external circuit. A pure silicon wafer, even in strong sunlight, cannot generate electricity and become a solar cell. What is needed is a mechanism to propel electrons and holes in opposite directions in the crystal lattice, forcing current through an external circuit and producing useful power. This mechanism is provided by one of the great inventions of the 20th century, the semiconductor p–n junction.

Saturday, December 15, 2012

Solar Cell


Silicon solar cells have been the workhorse of the PV industry for many years and currently account for well over 80% of world production. Modules based on these cells have a long history of rugged reliability, with guarantees lasting 20 or 25 years that are exceptional among manufactured products.

Although cells made from other materials are constantly being developed and some are in commercial production, it will be hard to dislodge silicon from its pedestal. The underlying technology is that of semiconductor electronics: a silicon solar cell is a special form of semiconductor diode.

Fortunately, silicon in the form of silicon dioxide (quartz sand) is an extremely common component of the Earth's crust and is essentially non-toxic. There is a further good reason for focussing strongly on silicon cells in this chapter: in its crystalline form silicon has a simple lattice structure, making it comparatively easy to describe and appreciate the underlying science.

There are two major types of crystalline silicon solar cell in current high volume production:
  • Monocrystalline. The most efficient type, made from a very thin slice, or wafer, of a large single crystal obtained from pure molten silicon. The circular wafers, often 5 or 6 inches (15 cm) in diameter, have a smooth silvery appearance and are normally trimmed to a pseudo-square or hexagonal shape so that more can be fitted into a module. Fine contact fingers and busbars conduct the electric current away from the cells, which have a highly ordered crystal structure with uniform, predictable properties. However, they require careful and expensive manufacturing processes, including 'doping' with small amounts of other elements to produce the required electrical characteristics. Typical commercial module efficiencies fall in the range 12–16%. The module surface area required is about 7 m²/kWp.
  • Multicrystalline, also called polycrystalline. This type of cell is also produced from pure molten silicon, but using a casting process. As the silicon cools it sets as a large irregular multicrystal which is then cut into thin square or rectangular slices to make individual cells. Their crystal structure, being random, is less ideal than with monocrystalline material and gives slightly lower cell efficiencies, but this disadvantage is offset by lower wafer costs. Cells and modules of this type often look distinctly blue, with a scaly, shimmering appearance. Multicrystalline modules exhibit typical efficiencies in the range 11–15% and have overtaken their monocrystalline cousins in volume production over recent years. The module surface area is about 8 m²/kWp.
You have probably already gathered that the efficiency of any solar cell or module, the percentage of solar radiation it converts into electricity, is considered one of its most important properties. The higher the efficiency, the smaller the surface area for a given power rating. This is important when space is limited, and also because some of the additional costs of PV systems – especially mounting and fixing modules – are area-related. Crystalline silicon cells, when operated in strong sunlight, have the highest efficiencies of all cells commonly used in terrestrial PV systems, plus the promise of modest increases as the years go by due to improvements in design and manufacture. But it is important to realize that other types of cell often perform better in weak or diffuse light, a matter we shall return to in later sections.
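
As a rough check on the area figures quoted above, the module area needed per kilowatt-peak can be estimated from the standard rating irradiance of 1000 W/m². The short Python sketch below does this calculation; the efficiency values and the function name are illustrative assumptions, not figures from any particular manufacturer.

    # Sketch: module surface area needed per kilowatt-peak (kWp).
    # Peak ratings assume the standard test irradiance of 1000 W/m^2.
    STC_IRRADIANCE = 1000.0  # W/m^2

    def area_per_kwp(efficiency):
        """Module area (m^2) per kWp at the given conversion efficiency."""
        return 1000.0 / (STC_IRRADIANCE * efficiency)

    for name, eff in [("monocrystalline", 0.14),
                      ("multicrystalline", 0.13),
                      ("amorphous", 0.07)]:
        print(name, round(area_per_kwp(eff), 1), "m2/kWp")
    # monocrystalline 7.1 m2/kWp -- consistent with the ~7 m2/kWp quoted above

Note the inverse relationship: halving the efficiency doubles the area required for the same peak rating.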

Research laboratory cells achieve considerably higher efficiencies than mass-produced cells. This reflects the ongoing R&D effort that is continually improving cell design and leading to better commercial products. In some applications where space is limited and efficiency is paramount – for example, the famous solar car races held in Australia – high-quality cells made in small batches are often individually tested for efficiency before assembly.

Module efficiencies are slightly lower than cell efficiencies because a module’s surface area cannot be completely filled with cells and the frame also takes up space. It is always important to distinguish carefully between cell and module efficiency.

There is one further type of silicon solar cell in common use:
  • Amorphous. Most people have met small amorphous silicon (a-Si) cells in solar-powered consumer products such as watches and calculators, first introduced in the 1980s. Amorphous cells are cheaper than crystalline silicon cells, but have much lower efficiencies, typically 6–8%. Nowadays, large modules are available and suitable for applications where space is not at a premium, for example on building facades. The surface area required is about 16 m²/kWp.
We focus initially on crystalline silicon solar cells for two main reasons: their comparatively simple crystal structure and theoretical background; and their present dominant position in the terrestrial PV market. Their wafer technology has been around for a long time and is often referred to as 'first generation'; they are the cells you are most likely to see on houses, factories, and commercial buildings.

However, it is important to realize that many other semiconductor materials can be used to make solar cells. Most come under the heading of thin film – somewhat confusing because a-Si is also commonly given this title – and involve depositing very thin layers of semiconductor on a variety of substrates. Thin film products are generally regarded as the ultimate goal for terrestrial PV since they use very small amounts of semiconductor material and large-scale continuous production processes without any need to cut and mount individual crystalline wafers. Thin film modules based on the compound semiconductors copper indium diselenide (CIS) and cadmium telluride (CdTe) are in commercial production. Often referred to as 'second generation', they currently have efficiencies lower than those of crystalline silicon, but they represent a highly significant advance into thin film products. We will discuss them, and several types of specialized cells and modules, later.

Saturday, December 8, 2012

Electrostatics


If a conductor has a deficit of electrons, it will exhibit a net positive charge. If, on the other hand, it has a surplus of electrons, it will exhibit a net negative charge. An imbalance in charge can be produced by friction (removing or depositing electrons using materials such as silk and fur, respectively) or induction (by attracting or repelling electrons using a second body which is, respectively, positively or negatively charged).

Force between charges

Coulomb's Law states that, if charged bodies exist at two points, the force of attraction (if the charges are of opposite polarity) or repulsion (if the charges have the same polarity) will be proportional to the product of the magnitudes of the charges divided by the square of their distance apart. Thus:

F = (k × Q1 × Q2) / r²

where Q1 and Q2 are the charges present at the two points (in Coulombs), r is the distance separating the two points (in metres), F is the force (in Newtons), and k is a constant depending upon the medium in which the charges exist. In vacuum or 'free space',

k = 1 / (4πε0)

where ε0 is the permittivity of free space (approximately 8.854 × 10⁻¹² F/m).

Combining the two previous equations gives:

F = (Q1 × Q2) / (4πε0 r²)
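
As a quick numerical check of Coulomb's Law, the following Python sketch evaluates the free-space force between two hypothetical charges; the function name is ours, introduced only for illustration.

    import math

    EPSILON_0 = 8.854e-12  # permittivity of free space (F/m)

    def coulomb_force(q1, q2, r):
        """Force (N) between point charges q1 and q2 (C) separated by r (m)."""
        return (q1 * q2) / (4 * math.pi * EPSILON_0 * r**2)

    # Two charges of 1 uC separated by 100 mm:
    print(coulomb_force(1e-6, 1e-6, 0.1))  # ~0.9 N, repulsive (same polarity)
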
Electric fields

The force exerted on a charged particle is a manifestation of the existence of an electric field. The electric field defines the direction and magnitude of a force on a charged object. The field  itself is invisible to the human eye but can be drawn by constructing lines which indicate the motion of a free positive charge within the field; the number of field lines in a particular region being used to indicate the relative strength of the field at the point in question.

Figures 1.7 and 1.8 show the electric fields between charges of the same and opposite polarity while Fig. 1.9 shows the field which exists between two charged parallel plates.

Electric field strength

The strength of an electric field (E) is proportional to the applied potential difference and inversely proportional to the distance between the two conductors. The electric field strength is given by:

E = V / d

where E is the electric field strength (V/m), V is the applied potential difference (V) and d is the distance (m).

Example

Two parallel conductors are separated by a distance of 25 mm. Determine the electric field strength if they are fed from a 600 V d.c. supply.

Solution

The electric field strength will be given by:

E = V / d = 600 V / 0.025 m = 24,000 V/m = 24 kV/m

Permittivity

The amount of charge produced on the two plates shown in Fig. 1.9 for a given applied voltage will depend not only on the physical dimensions but also on the insulating dielectric material that appears between the plates. Such materials need to have a very high value of resistivity (they must not conduct charge) coupled with an ability to withstand high voltages without breaking down.

A more practical arrangement is shown in Fig. 1.10. In this arrangement the ratio of charge, Q, to potential difference, V, is given by the relationship:

Q / V = (ε × A) / d

where A is the surface area of the plates (in m²), d is the separation (in m), and ε is a constant for the dielectric material known as the absolute permittivity of the material (sometimes also referred to as the dielectric constant).

The absolute permittivity of a dielectric material is the product of the permittivity of free space (ε0) and the relative permittivity (εr) of the material. Thus:

ε = ε0 × εr
The dielectric strength  of an insulating dielectric is the maximum electric field strength that can safely be applied to it before breakdown (conduction) occurs. Table 1  shows values of relative permittivity and dielectric strength for some common dielectric materials.
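
The ratio Q/V defined above is, of course, the capacitance of the arrangement. The Python sketch below evaluates it for a hypothetical pair of plates; the relative permittivity figure is an assumed value, not taken from the table.

    EPSILON_0 = 8.854e-12  # permittivity of free space (F/m)

    def charge_per_volt(area, separation, rel_permittivity=1.0):
        """Q/V (in Farads): (eps0 * epsr * A) / d for parallel plates."""
        return (EPSILON_0 * rel_permittivity * area) / separation

    # 100 cm^2 plates, 1 mm apart, assumed dielectric with epsr = 5:
    print(charge_per_volt(0.01, 1e-3, 5))  # ~4.43e-10 F, i.e. about 443 pF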

Electromagnetism

When a current flows through a conductor a magnetic field is produced in the vicinity of the conductor. The magnetic field is invisible but its presence can be detected using a compass needle (which will deflect from its normal North-South position). If two current-carrying conductors are placed in the vicinity of one another, the fields will interact with one another and the conductors will experience a force of attraction or repulsion (depending upon the relative direction of the two currents).

Force between two current-carrying conductors

The mutual force which exists between two parallel current-carrying conductors will be proportional to the product of the currents in the two conductors and the length of the conductors but inversely proportional to their separation. Thus:

F = (k × I1 × I2 × l) / d

where I1 and I2 are the currents in the two conductors (in Amperes), l is the parallel length of the conductors (in metres), d is the distance separating the two conductors (in metres), F is the force (in Newtons), and k is a constant depending upon the medium in which the conductors exist. In vacuum or 'free space',

k = μ0 / (2π)

where μ0 is the permeability of free space (4π × 10⁻⁷ H/m).

Combining the two previous equations gives:

F = (μ0 × I1 × I2 × l) / (2πd)
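
A minimal Python sketch of the combined equation; the 1 m, 1 A case reproduces the classic 2 × 10⁻⁷ N result historically used to define the Ampere.

    import math

    MU_0 = 4 * math.pi * 1e-7  # permeability of free space (H/m)

    def force_between_conductors(i1, i2, length, separation):
        """Force (N) on parallel conductors carrying i1, i2 (A) over length (m)."""
        return (MU_0 * i1 * i2 * length) / (2 * math.pi * separation)

    print(force_between_conductors(1, 1, 1.0, 1.0))  # ~2e-07 N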

Magnetic fields

The field surrounding a straight current-carrying conductor is shown in Fig. 1.11. The magnetic field defines the direction of motion of a free North pole within the field. In the case of Fig. 1.11, the lines of flux are concentric and the direction of the field (determined by the direction of current flow) is given by the right-hand rule.


Magnetic field strength

The strength of a magnetic field is a measure of the density of the flux at any particular point. In the case of Fig. 1.11, the field strength will be proportional to the applied current and inversely proportional to the perpendicular distance from the conductor. Thus:

B = (k × I) / d

where B is the magnetic flux density (in Tesla), I is the current (in Amperes), d is the distance from the conductor (in metres), and k is a constant.

Assuming that the medium is vacuum or 'free space', the density of the magnetic flux will be given by:

B = (μ0 × I) / (2πd)

where μ0 is the permeability of free space (4π × 10⁻⁷ H/m).

The flux density is also equal to the total flux divided by the area of the field. Thus:

B = Φ / A

where Φ is the flux (in Webers) and A is the area of the field (in square metres).
In order to increase the strength of the field, a conductor may be shaped into a loop (Fig. 1.12) or coiled to form a solenoid (Fig. 1.13). Note, in the latter case, how the field pattern is exactly the same as that which surrounds a bar magnet.

Example

Determine the flux density produced at a distance of 50 mm from a straight wire carrying a current of 20 A.

Solution

The flux density will be given by:

B = (μ0 × I) / (2πd) = (4π × 10⁻⁷ × 20) / (2π × 0.05) = 8 × 10⁻⁵ T

Hence B = 80 µT.

Example

A flux density of 2.5 mT is developed in free space over an area of 20 cm2. Determine the total flux.
Solution

Φ = B × A = 2.5 × 10⁻³ × 20 × 10⁻⁴ = 5 × 10⁻⁶ Wb

Hence the total flux is 5 µWb.
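
Both worked examples can be re-checked with a few lines of Python using the free-space formulas above:

    import math

    MU_0 = 4 * math.pi * 1e-7  # permeability of free space (H/m)

    # Flux density 50 mm from a straight wire carrying 20 A:
    B = (MU_0 * 20) / (2 * math.pi * 0.05)
    print(B)  # ~8e-05 T (80 uT)

    # Total flux for B = 2.5 mT over an area of 20 cm^2:
    phi = 2.5e-3 * 20e-4  # flux = B * A
    print(phi)  # ~5e-06 Wb (5 uWb)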

Thursday, December 6, 2012

Ohm’s Law


Provided that temperature does not vary, the ratio of p.d. across the ends of a conductor to the current flowing in the conductor is a constant. This relationship is known as Ohm’s Law and it leads to the relationship:
V / I = a constant = R


where V is the potential difference (or voltage drop) in Volts (V), I is the current in Amperes (A), and R is the resistance in Ohms (see Fig. below).  The formula may be arranged to make V, I or R the subject, as follows:
V = I × R , I = V / R and  R = V / I 


The triangle shown in Fig. A should help you remember these three important relationships. However, it's worth noting that, when performing calculations of currents, voltages and resistances in practical circuits it is seldom necessary to work with an accuracy of better than ±1% simply because component tolerances are usually greater than this. Furthermore, in calculations involving Ohm's Law, it can sometimes be convenient to work in units of kOhm and mA (or MOhm and µA) in which case potential differences will be expressed directly in V.
Example 

A 12 Ohm resistor is connected to a 6V battery. What current will flow in the resistor?

Solution

Here we must use  I = V / R (where V = 6 V and  R = 12 Ohm):

I = V / R = 6 V / 12 Ohm = 0.5 A (or 500 mA)

Hence a current of 500 mA will flow in the resistor.

Example

A current of 100 mA flows in a 56 Ohm resistor. What voltage drop (potential difference) will be developed across the resistor?

Solution

Here we must use  V = I × R and ensure that we work in units of Volts (V), Amperes (A) and Ohms.

V = I × R = 0.1 A × 56 Ohm = 5.6 V

(Note that 100 mA is the same as 0.1 A.) This calculation shows that a p.d. of 5.6 V will be developed across the resistor.

Example

A voltage drop of 15 V appears across a resistor in which a current of 1 mA flows. What is the value of the resistance?

Solution

R = V / I = 15 V / 0.001 A = 15,000 Ohm = 15 kOhm

Note that it is often more convenient to work in units of mA and V which will produce an answer directly in kOhm, i.e.

R = V / I = 15 V / 1 mA = 15 kOhm
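
The three rearrangements of Ohm's Law translate directly into code. The short Python sketch below re-checks the three worked examples:

    def voltage(i, r):      # V = I * R
        return i * r

    def current(v, r):      # I = V / R
        return v / r

    def resistance(v, i):   # R = V / I
        return v / i

    print(current(6, 12))        # 0.5 A (500 mA) in the 12 Ohm resistor
    print(voltage(0.1, 56))      # 5.6 V across the 56 Ohm resistor
    print(resistance(15, 1e-3))  # 15000 Ohm, i.e. 15 kOhm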

Saturday, December 1, 2012

Basic Real-Time Concepts -- CPU Utilization


CPU Utilization

The final and most important term to be defined is a critical measure of real-time system performance. Because, in the von Neumann paradigm, the CPU continues to fetch, decode, and execute instructions as long as power is applied, the CPU will execute either no-ops or instructions that are not related to the satisfaction of a deadline (for example, noncritical "housekeeping"). The measure of the time spent doing idle processing, in a sense, indicates how much real-time processing is occurring.
Definition: The (CPU) utilization or time-loading factor, U, is a measure of the percentage of nonidle processing.
A system is said to be time-overloaded if U > 100%. Systems that are too highly utilized are undesirable because changes or additions cannot be made to the system without risk of time-overloading. Systems that are not sufficiently utilized are not necessarily good, because this implies that the system was overengineered and that costs could be reduced with less expensive hardware.

While a utilization of 50% is common for new products, 80% might be acceptable for systems that do not expect growth. However, 70% as a target for U is one of the most celebrated and potentially useful results in the theory of real-time systems where tasks are periodic and independent – a result that will be examined later. Table below gives a summary of certain CPU utilizations and typical situations in which they are associated.
U is calculated by summing the contribution of utilization factors for each (periodic or aperiodic) task. Suppose a system has n ≥ 1 periodic tasks, each with an execution period pi, and hence an execution frequency fi = 1/pi. If task i is known to have (or has been estimated to have) a maximum (worst-case) execution time ei, then the utilization factor ui for task i is

ui = ei / pi

Then the overall system utilization is

U = u1 + u2 + ... + un = Σ (ei / pi), summed over i = 1 to n

Note that the deadline for periodic task i, di, is typically the next cycle or time period, and is a critical design factor that is constrained by ei. The determination of the ei either prior to or after the code has been written can be extremely difficult, and in many cases impossible, in which case estimation must be used. For aperiodic and sporadic tasks, ui is calculated by assuming some worst-case execution period, usually the worst-case delay between event occurrences. Such approximations can inflate the utilization factor unnecessarily or can lead to overconfidence because of the tendency to "not worry" about its excessive contribution. The danger is to discover later that a higher frequency of occurrence than budgeted has led to a time-overload and system failure.
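
The utilization calculation is simple enough to express in a few lines. The Python sketch below computes U for a hypothetical set of periodic tasks, each given as a (worst-case execution time, period) pair; the 70% figure is used only as the illustrative warning level discussed above.

    def cpu_utilization(tasks):
        """Overall utilization U = sum of e_i / p_i over all (e_i, p_i) tasks."""
        return sum(e / p for e, p in tasks)

    # Hypothetical task set (times in milliseconds):
    tasks = [(5, 20), (10, 50), (15, 100)]
    U = cpu_utilization(tasks)
    print(f"U = {U:.0%}")  # U = 60%
    if U > 1.0:
        print("time-overloaded")
    elif U > 0.7:
        print("above the classic 70% bound for periodic, independent tasks")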

The utilization factor differs from CPU throughput, which is a measure of the number of macroinstructions per second that can be processed based on some predetermined instruction mix. This type of measurement is typically used to compare CPU horsepower for a particular application.

Basic Real-Time Concepts -- Events and Determinism


Events and Determinism

In software systems, a change in state results in a change in the flow-of-control of the computer program. Consider the flowchart below.


The decision block represented by the diamond suggests that the stream of program instructions can take one of two paths, depending on the response to the question. if-then, goto, and case statements in any language represent a possible change in flow-of-control. Invocations of procedures in C and Ada represent changes in flow-of-control. In object-oriented languages, instantiation of an object or the invocation of a method causes the change in sequential flow-of-control. In general, consider the following definition.

Definition: Any occurrence that causes the program counter to change nonsequentially is considered a change of flow-of-control, and thus an event.
 
In scheduling theory, the “release” time of a “job” is similar to an event.
 
Definition: The release time is the time at which an instance of a scheduled task is ready to run, and is generally associated with an interrupt.
 
Events are slightly different from jobs in that events can be caused by interrupts as well as conditional and unconditional branches.
 
Synchronous and Asynchronous Events
 
An event can be either synchronous or asynchronous. Synchronous events are those that occur at predictable times in the flow-of-control, such as that represented by the decision box in the flowchart. The change in flow-of-control, represented by a conditional branch instruction or by the occurrence of an internal trap interrupt, can be anticipated (although it may not always occur).
 
Asynchronous events occur at unpredictable points in the flow-of-control and are usually caused by external sources. A clock that pulses "regularly" at 5 milliseconds is not a synchronous event. While it represents a periodic event, even if the clock were able to tick at a perfect 5 milliseconds without drift (which it cannot, for physical reasons), the point at which the tick occurs relative to the flow-of-control is subject to many factors. These factors include the time at which the clock starts relative to the program and propagation delays in the computer system itself. An engineer can never count on a clock ticking exactly at the rate specified, and so a clock-driven event must be treated as asynchronous.
 
Events that do not occur at regular intervals (or periods) are called aperiodic. Aperiodic events that tend to occur very infrequently are called sporadic.
 
Table below characterizes a sampling of events.
 For example, an interrupt generated by a periodic external clock represents a periodic but asynchronous event. A periodic but synchronous event is one represented by a sequence of invocation of tasks in a repeated, circular fashion, otherwise known as cyclic code. A typical conditional or unconditional branching instruction that is not part of a code block and that runs repeatedly at a regular rate represents a synchronous but aperiodic event. A branch instruction that happens infrequently, say, on the detection of some exceptional condition, is both sporadic and synchronous. Finally, interrupts that are generated irregularly (randomly) by an external device are classified as either asynchronous aperiodic or sporadic, depending on whether the interrupt is generated frequently or not with respect to the system clock.
 
 Determinism
 
In every system, and particularly in an embedded real-time system, maintaining control is extremely important. For any physical system, certain states exist under which the system is considered to be out of control; the software controlling such a system must therefore avoid these states. For example, in certain aircraft guidance systems, rapid rotation through a 180° pitch angle can cause loss of gyroscopic control. The software must be able to anticipate and avert all such scenarios.

Another characteristic of a software-controlled system is that the CPU continues to fetch and execute instructions from the program area of memory, rather than from data or other unwanted memory regions. The latter scenario can occur in poorly tested systems and is a catastrophe from which there is almost no hope of recovery.

Software control of any real-time system and associated hardware is maintained when the next state of the system, given the current state and a set of inputs, is predictable. In other words, the goal is to anticipate how a system will behave in all possible circumstances.
 
Definition: A system is deterministic if, for each possible state and each set of inputs, a unique set of outputs and next state of the system can be determined.
 
Event determinism means the next states and outputs of a system are known for each set of inputs that trigger events. Thus, a system that is deterministic is event deterministic. Although it would be difficult for a system to be deterministic only for those inputs that trigger events, this is plausible, and so event determinism does not necessarily imply determinism.
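
One way to picture event determinism is as a lookup table from (state, input) pairs to a unique (next state, output) pair. The Python sketch below shows a hypothetical two-state controller; because the table is single-valued, the next state and output are uniquely determined for every input that triggers an event.

    # (state, event) -> (next_state, output); single-valued, so deterministic.
    TRANSITIONS = {
        ("idle",    "start"): ("running", "motor_on"),
        ("idle",    "stop"):  ("idle",    "no_op"),
        ("running", "start"): ("running", "no_op"),
        ("running", "stop"):  ("idle",    "motor_off"),
    }

    def step(state, event):
        return TRANSITIONS[(state, event)]

    print(step("idle", "start"))  # ('running', 'motor_on') -- every time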

It is interesting to note that while it is a significant challenge to design systems that are completely event deterministic, and, as mentioned, it is possible to inadvertently end up with a system that is nondeterministic, it is also hard to design systems that are deliberately nondeterministic. This situation arises from the difficulties in designing completely random number generators. Deliberately nondeterministic systems would be desirable, for example, as casino gambling machines.

Finally, if in a deterministic system the response time for each set of outputs is known, then the system also exhibits temporal determinism.
A side benefit of designing deterministic systems is that guarantees can be given that the system will be able to respond at any time, and in the case of temporally deterministic systems, when they will respond. This reinforces the association of control with real-time systems.
 

Thursday, November 29, 2012

Basic Real-Time Concepts -- Real-Time Definitions


Real-Time Definitions


Definition: A real-time system is a system that must satisfy explicit (bounded) response-time constraints or risk severe consequences, including failure.
 
What is a “failed” system? In the case of the space shuttle or a nuclear plant, it is painfully obvious when a failure has occurred. For other systems, such as an automatic bank teller machine, the notion of failure is less clear. For now, failure will be defined as the “inability of the system to perform according to system specification,” or, more formally:
 
Definition: A failed system is a system that cannot satisfy one or more of the requirements stipulated in the formal system specification.
 
Because of this definition of failure, precise specification of the system operating criteria, including timing constraints, is important. This matter is discussed later.
 
Various other definitions exist for real-time, depending on which source is consulted. Nonetheless, the common theme among all definitions is that the system must satisfy deadline constraints in order to be correct. For example, an alternative definition might be:
 
Definition: A real-time system is one whose logical correctness is based on both the correctness of the outputs and their timeliness.
 
In any case, note that if the notion of timeliness is dropped, every system becomes a real-time system.
 
Real-time systems are often reactive or embedded systems. Reactive systems are those in which scheduling is driven by ongoing interaction with their environment; for example, a fire-control system reacts to buttons pressed by a pilot. Embedded systems are those that are found in a system that is not itself a computer. For example, a modern automobile contains many embedded computers that control fuel injection, airbag deployment, braking, climate control, and so forth. Today, many household items such as televisions, stereos, washing machines, even toys contain embedded computers. It is clear that sophisticated systems such as aircraft, spacecraft, and industrial machines must contain many embedded, reactive computer systems.
 
The three systems mentioned earlier satisfy the criteria for a real-time system precisely. An aircraft must process accelerometer data within a certain period that depends on the specifications of the aircraft; for example, every 10 milliseconds. Failure to do so could result in a false position or velocity indication and cause the aircraft to go off-course at best or crash at worst. For a nuclear reactor thermal problem, failure to respond swiftly could result in a meltdown. Finally, an airline reservation system must be able to handle a surge of passenger requests within the passenger’s perception of a reasonable time (or before the flights leave the gate). In short, a system does not have to process data in microseconds to be considered real-time; it must simply have response times that are constrained.
 
When Is a System Real-Time?
 
It can be argued that all practical systems are real-time systems. Even a batch-oriented system – for example, grade processing at the end of a semester or a bimonthly payroll run – is real-time. Although the system may have response times of days or weeks (e.g., the time that elapses between submitting the grade or payroll information and issuance of the report card or check), it must respond within a certain time or there could be an academic or financial disaster. Even a word-processing program should respond to commands within a reasonable amount of time (e.g., 1 second), or it will become torturous to use. Most of the literature refers to such systems as soft real-time systems.
 
Definition: A soft real-time system is one in which performance is degraded but not destroyed by failure to meet response-time constraints.
Conversely, systems where failure to meet response-time constraints leads to complete and catastrophic system failure are called hard real-time systems.
 
Definition: A hard real-time system is one in which failure to meet a single deadline may lead to complete and catastrophic system failure.
 
 
Firm real-time systems are those systems with hard deadlines where some arbitrarily small number of missed deadlines can be tolerated.
 
Definition: A firm real-time system is one in which a few missed deadlines will not lead to total failure, but missing more than a few may lead to complete and catastrophic system failure.


As noted, all practical systems minimally represent soft real-time systems. Table above gives a sampling of hard, firm, and soft real-time systems.
 
Note that there is a great deal of latitude for interpretation of hard, firm, and soft real-time systems. For example, in the automated teller machine, missing too many deadlines will lead to significant customer dissatisfaction and potentially even enough loss of business to threaten the existence of the bank. This extreme scenario illustrates the fact that almost every system can be characterized as soft, firm, or hard real-time by the construction of a supporting scenario. The careful construction of systems requirements (and, hence, expectations) is the key to setting and meeting realistic deadline expectations. In any case, it is a principal goal of real-time systems engineering to find ways to transform hard deadlines into firm ones, and firm ones into soft ones.
 
The Nature of Time
 
It is typical, in studying real-time systems, to consider the nature of time, because deadlines are instants in time. But the question arises, "Where do the deadlines come from?" Generally speaking, deadlines are based on the underlying physical phenomena of the system under control. For example, in animated displays, images must be updated at approximately 30 frames per second to provide continuous motion, because the human eye can resolve updating at a slower rate. In navigation systems, accelerations must be read at a rate that is based on the maximum velocity of the vehicle, and so on. In some cases, systems have deadlines imposed on them that are based on nothing more than guessing, or on some forgotten and since-eliminated requirement. The problem in these cases is that undue constraints may be placed on the systems. This is a primary maxim of real-time systems design – to understand the basis and nature of the timing constraints, so that they can be relaxed if necessary.
 
Many real-time systems utilize time-stamping and global clocks for synchronization, task initiation, and data marking. It must be noted, however, that clocks keep inaccurate time; even the official U.S. atomic clock must be adjusted. Moreover, there is an associated digitization error with clocks, which may need to be considered when using them for data time-stamping.
 
 

Monday, November 26, 2012

Basic Real-Time Concepts --- Systems Concepts


Systems Concepts


The hardware of the general-purpose computer solves problems by repeated execution of macroinstructions, collectively known as software. Software is traditionally divided into system programs and application programs.

System programs consist of software that interfaces with the underlying computer hardware, such as schedulers, device drivers, dispatchers, and programs that act as tools for the development of application programs. These tools include compilers, which translate high-order language programs into assembly code; assemblers, which translate the assembly language into a special binary format called object or machine code; and linkers, which prepare the object code for execution. An operating system is a specialized collection of system programs that manage the physical resources of the computer. As such, a real-time operating system is a systems program.

Application programs are programs written to solve specific problems, such as payroll preparation, inventory, and navigation. Certain design considerations play a role in the design of certain systems programs and application software intended to run in real-time environments.

The notion of a “system” is central to software engineering, and indeed to all engineering, and warrants formalization.

Definition: A system is a mapping of a set of inputs into a set of outputs.

When the internal details of the system are not of interest, the mapping function can be considered as a black box with one or more inputs entering and one or more outputs exiting the system (see Figure A).

Every real-world entity, whether synthetic or occurring naturally, can be modeled as a system. In computing systems, the inputs represent digital data from hardware devices and other software systems. The inputs are often associated with sensors, cameras, and other devices that provide analog inputs, which are converted to digital data, or provide direct digital input. The digital output of the computer system can be converted to analog outputs to control external hardware devices such as actuators and displays (Figure B).
Modeling a real-time system, as in Figure B, is somewhat different from the more traditional model of the real-time system as a sequence of jobs to be scheduled and performance to be predicted, which is very similar to that shown in Figure C. The latter view is simplistic in that it ignores the fact that the input sources and hardware under control are complex. Moreover, there are other, sweeping software engineering considerations that are hidden by the model shown in Figure C.
Look again at the model of a real-time system shown in Figure B. Note that in its realization there is some delay between presentation of the inputs (stimulus) and appearance of the outputs (response). This fact can be formalized as follows:

Definition: The time between the presentation of a set of inputs to a system (stimulus) and the realization of the required behavior (response), including the availability of all associated outputs, is called the response time of the system.

How fast the response time needs to be depends on the purpose of the system.
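
Measured directly, response time is simply the interval between time-stamping the stimulus and time-stamping the availability of all associated outputs. A minimal Python sketch (the processing function is hypothetical):

    import time

    def respond(inputs):
        # Hypothetical processing standing in for the required behavior
        return [x * 2 for x in inputs]

    t_stimulus = time.perf_counter()   # set of inputs presented
    outputs = respond(range(100_000))
    t_response = time.perf_counter()   # all associated outputs available

    print((t_response - t_stimulus) * 1000, "ms response time")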
 

Sunday, November 25, 2012

Voltage and Resistance


The ability of an energy source (e.g. a battery) to produce a current within a conductor may be expressed in terms of electromotive force  (e.m.f.). Whenever an e.m.f. is applied to a circuit  a potential difference  (p.d.) exists. Both e.m.f. and p.d. are measured in volts (V). In many practical circuits there is only one e.m.f. present (the battery or supply) whereas a p.d. will be developed across each component present in the circuit.
The conventional flow of current in a circuit is from the point of greatest positive potential to the point of greatest negative potential (note that electrons move in the opposite direction!). Direct current results from the application of a direct e.m.f. (derived from batteries or a d.c. power supply). An essential characteristic of these supplies is that the applied e.m.f. does not change its polarity (even though its value might be subject to some fluctuation).

For any conductor, the current flowing is directly proportional to the e.m.f. applied. The current flowing will also be dependent on the physical dimensions (length and cross-sectional area) and material of which the conductor is composed.
The amount of current that will flow in a conductor when a given e.m.f. is applied is inversely proportional to its resistance. Resistance, therefore, may be thought of as an opposition to current flow; the higher the resistance the lower the current that will flow (assuming that the applied e.m.f. remains constant).

Conductors and Insulators


Electric current is the name given to the flow of electrons  (or negative charge carriers). Electrons orbit around the nucleus of atoms just as the earth orbits around the sun.

Electrons are held in one or more shells, constrained to their orbital paths by virtue of a force of attraction towards the nucleus, which contains an equal number of protons (positive charge carriers). Since like charges repel and unlike charges attract, negatively charged electrons are attracted to the positively charged nucleus. A similar principle can be demonstrated by observing the attraction between two permanent magnets; the two North poles of the magnets will repel each other, while a North and South pole will attract. In the same way, the unlike charges of the negative electron and the positive proton experience a force of mutual attraction.

The outer shell electrons of a conductor  can be reasonably easily interchanged between adjacent atoms within the  lattice  of atoms of which the substance is composed. This makes it possible for the material to conduct electricity. Typical  examples of conductors are metals such as copper, silver, iron and aluminium. By contrast, the outer shell electrons of an  insulator  are firmly bound to their parent atoms and virtually no interchange of electrons is possible. Typical examples of  insulators are plastics, rubber and ceramic materials.