Thursday, 31 March 2011

What Are The Advantages Of Process Sensors?

Factories nowadays, no matter what they produce, mostly follow an automated process through which things move from one stage to the next with minimal human intervention. Advances in technology have made it possible for routine tasks, such as adjusting screws or classifying items according to their size, to be done by machines without an operator intervening.
However, no matter how much we trust the machines and rely on their accuracy, it is still important to be vigilant over the production process and avoid leaving it unattended.

This is why process sensors play an important role in helping humans verify that the automated manufacturing process is being followed without problems or interruptions. Process sensors ensure that each machine is performing its task correctly and that the final product meets all the expected requirements and conditions, such as humidity, shape, size, weight or whatever else needs to be controlled.
There are different types of process sensors, according to which part of the manufacturing process they have to supervise and what needs to be monitored.

Process sensors are very important because they provide useful feedback about the condition of the manufacturing process itself. Thus, the process can run smoothly, and operators can easily be warned if any corrective measure or adjustment needs to be implemented. These kinds of supervising devices can be contact or contact-less; contact-less sensors do not need to touch the parts, environment or products they are checking, yet can still give useful information to the system. The food industry is one of the main users of process sensors, in the manufacture of crackers, cereals, potato chips, instant coffee, soya products, powder drinks, roast and ground coffee, tea and milk powders. They are also used in the paper, chemicals and animal feed industries.
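The kind of tolerance check and operator warning described above can be sketched in a few lines of code. Everything here — the parameter names, limits and readings — is a made-up illustration, not any particular vendor's interface:

```python
# Hypothetical sketch: checking in-line sensor readings against product
# tolerances, the sort of verification a process sensor system performs.
# All names and limits here are illustrative assumptions.

TOLERANCES = {
    "moisture_pct": (2.0, 5.0),    # acceptable moisture range, percent
    "weight_g":     (48.0, 52.0),  # acceptable package weight, grams
}

def check_reading(parameter, value):
    """Return None if within tolerance, or a warning string for operators."""
    low, high = TOLERANCES[parameter]
    if low <= value <= high:
        return None
    return f"ALERT: {parameter}={value} outside [{low}, {high}]"

# Two simulated sensor readings: moisture is out of range, weight is fine.
warnings = [w for w in (
    check_reading("moisture_pct", 6.1),
    check_reading("weight_g", 50.3),
) if w]
```

In a real line, readings like these would arrive continuously from the sensors, and the alerts would feed the operator's console rather than a list.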

As can clearly be seen, process sensors are very advantageous for industries whose processes can be automated but that, at the same time, require each step of the process to comply with certain requirements such as humidity level, part size, weight and so on. These devices are especially important because they help perform and speed up a kind of supervision that would be nearly impossible for the human eye, and that must be done at a specific pace so that the products being manufactured are not delayed.

Why Thunderstorms Are an Amazing Phenomenon

Thunderstorms are a phenomenon quickly moving into the spotlight of science. Why is that, you ask? People have always felt that thunderstorms were good for the environment and brought cleansing, but now there's scientific evidence to back it up. Not only that, but it has recently been discovered that intense thunderstorms actually produce the most expensive substance on the planet. Platinum? No. Antimatter. For those of you who didn't watch enough Star Trek as a child, antimatter is an extremely volatile, immensely powerful substance used to power the spaceships of the future. At our current state of technology, antimatter is extremely expensive to produce, so naturally scientists are seeking ways of capturing it from thunderstorms.

This definitely brings a new meaning to mind for the phrase "storm chaser". I can just see groups of scientists scouring the globe in blimps, trying to capture antimatter being generated and destroyed by thunderstorms. Though I doubt that it will ever happen, it's fun to imagine. At present, antimatter costs so much to create and is so difficult to store that it's completely unfeasible to use it for anything practical. I'm sure a lot of research is being done to find out if there's any way at all to capture the antimatter generated by a bolt of lightning. If that ever happens, I'll be more than a little impressed. But let's move on.

Though your grandparents could have told you a long time ago, it has now been scientifically noted that thunderstorms have some pretty powerful healing properties for the environment. The whole environment is cleansed during a thunderstorm. The main reason for this is the different molecules that are created during a thunderstorm, including ozone and hydrogen peroxide. Because of the intensity of the lightning and other factors, hydrogen peroxide and ozone molecules are generated which help to cleanse the environment of disease and also to promote new growth. Most people are unaware that hydrogen peroxide is actually quite beneficial to both the leaves and the root systems of plants, especially developing plants. So a thunderstorm not only cleanses the environment of harmful disease, it also promotes new growth. Perhaps that is the reason the environment seems so fresh, vibrant, and crisp after a thunderstorm.

But the thing that I find most interesting is that science has actually attempted to duplicate the effects of thunderstorms in a creative way. NASA engineers used thunderstorms as the basis for developing an air purifier technology for use in their space stations. They developed a technology that utilizes hydrogen peroxide molecules and other molecules generated during a thunderstorm, magnifying their effects to create a purifier system that cleanses the environment of disease and other contaminants and leaves the air as permanently fresh and crisp as it is after a really intense thunderstorm. What a cool application of both science and nature. Much of the time, science seems to seek artificial solutions that are distinct from solutions already present in nature; very rarely does it attempt to mimic the natural. It makes sense to me that NASA would need to develop the very best air purifier out there for use in its space stations. Up in space, air is just about the most precious commodity, and keeping it clean and fresh is definitely a priority, one would think.

Wednesday, 30 March 2011

Non-Invasive Blood Pressure in Mice and Rats

Over the past 20 years, research scientists have attempted to non-invasively measure mouse and rat blood pressure (BP) with varying degrees of success.
The ability to accurately and non-invasively measure the systolic and diastolic blood pressure, in addition to the heart pulse rate and other blood flow parameters in rodents, is of great clinical value to the researcher.

Invasive Blood Pressure, Rat and Mouse Measurement

Direct blood pressure, an invasive surgical procedure, is the gold standard to compare the accuracy of non-invasive blood pressure (NIBP) technologies. Direct blood pressure should be obtained on the rodent's carotid artery when comparing to NIBP. "Validation in Awake Rats of a Tail Cuff Method for Measuring Systolic Pressure", Bunag, R.D., Journal of Applied Physiology, Vol 34, Pgs 279-282, 1973.

Radiotelemetry, a highly invasive surgical procedure, is a very reliable blood pressure technology and is also utilized to compare the accuracy of NIBP technologies. Telemetry involves the implantation of radio transmitters in the rodent's body. This technique is well validated and has excellent correlation with direct blood pressure.

The advantage of implantable radio telemetry is the ability to continuously measure rat and mouse blood pressure in free moving laboratory animals.

The disadvantages of radiotelemetry are: (1) morbidity associated with the initial surgical implantation of the transmitter; (2) morbidity associated with surgery required to replace the battery, which has a short battery life; (3) increase in the animal's level of stress, especially mice, in relationship to the large, heavy transmitters (2004, ATLA, 4th World Congress, Einstein, Billing, Singh and Chin); (4) abnormal behavior since the animal cannot have social interaction due to the current technology requiring the implanted animal to be isolated, one animal per cage; (5) inability to perform high throughput screening; (6) high cost of the initial equipment set-up and the expensive transmitters that require frequent factory maintenance; (7) cost of material and human resources relating to ongoing surgeries; and (8) the lack of a competitive market resulting in high product and servicing costs.

Non-Invasive Blood Pressure, Rat and Mouse Measurement

The NIBP methodology consists of using a cuff placed on the tail to occlude the blood flow. Upon deflation, one of several types of NIBP sensors, placed distal to the occlusion cuff, is used to monitor the rat BP. There are three (3) types of NIBP sensor technologies: photoplethysmography, piezoplethysmography and Volume Pressure Recording. Each method utilizes an occlusion tail-cuff as part of the procedure.
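The tail-cuff principle can be illustrated with a short sketch: systolic pressure is read as the cuff pressure at which the distal sensor first detects a returning pulse during deflation. This is a conceptual illustration only, not any instrument's actual algorithm; the sample data and detection threshold are synthetic assumptions:

```python
# Illustrative sketch of the tail-cuff deflation method: as the occlusion
# cuff deflates, the systolic BP is taken as the cuff pressure at which the
# distal sensor first detects a pulse. Data and threshold are synthetic.

def systolic_from_deflation(cuff_pressures, pulse_amplitudes, threshold=0.1):
    """cuff_pressures: mmHg samples in deflation order; pulse_amplitudes:
    matching distal-sensor signal amplitudes."""
    for pressure, amplitude in zip(cuff_pressures, pulse_amplitudes):
        if amplitude > threshold:   # first reappearance of the pulse
            return pressure
    return None                     # no pulse detected; reading failed

cuff = [150, 140, 130, 120, 110, 100]       # deflating cuff, mmHg
pulse = [0.0, 0.0, 0.0, 0.25, 0.4, 0.5]     # pulse returns near 120 mmHg
systolic = systolic_from_deflation(cuff, pulse)
```

A real sensor stream would of course be noisy and continuously sampled; the three technologies described below differ mainly in how reliably they detect that first pulse.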

1. Photoplethysmography

The first and oldest sensor type is photoplethysmography (PPG), a light-based technology. The purpose is to record the first appearance of the pulse while deflating the occlusion cuff, or the disappearance of pulses upon inflation. Photoplethysmography utilizes an incandescent or LED light source to record the pulse signal: the light source illuminates a small spot on the tail, and the sensor attempts to record the pulse passing beneath it.

Photoplethysmography (PPG) is relatively inaccurate since the readings are based solely on the amplitude of a single pulse and can only imprecisely measure the systolic blood pressure and the heart beat. There are many limitations to a light-based technology, such as: (1) over-saturation of the BP signal by ambient light; (2) extreme sensitivity to the rodent's movement (motion artifact); and (3) the difficulty in obtaining adequate mice blood pressure signals in dark skinned rodents (Pigmentation Differentiation). Light-based sensors also cause tail burns from close contact and prolonged exposure.

Diastolic blood pressure cannot be measured by photoplethysmography since the technology records only the first appearance of the pulse. If the diastolic BP is displayed on the photoplethysmographic instrumentation, it is only an estimation that is calculated by a software algorithm rather than a true measurement.

Additional variability and inaccuracy occurs in PPG devices that rely on obtaining readings during occlusion cuff inflation.

Occlusion cuff length is another source of variability and inaccuracy. Occlusion cuff length is inversely related to the accuracy of the blood pressure reading. Long cuffs, found in most photoplethysmographic devices, record measurements lower than the actual blood pressure.

These limitations severely compromise the consistency, dependability and accuracy of the NIBP measurements obtained by devices that utilize light-based/LED photoplethysmographic technology.

The photoplethysmography method correlates poorly with direct blood pressure measurements and is the least recommended sensor technology for NIBP in rodents, especially mice.

2. Piezoplethysmography

The second NIBP sensor technology is piezoplethysmography. Piezoplethysmography and photoplethysmography require the same first appearance of a pulse in the tail to record the systolic blood pressure and heart rate.

Both plethysmographic methods have similar clinical limitations. Whereas photoplethysmography uses a light source to attempt to record the pulse signal, piezoplethysmography utilizes piezoelectric ceramic crystals to do the same. From a technical point of view, piezoplethysmography is far more sensitive than photoplethysmography since the signal from the sensor is the rate of change of the pulse rather than just the pulse amplitude. Therefore, even extremely small mice with high velocity pulses will generate a sufficient signal to be detected with simple amplifiers.

Piezoelectric sensors are more accurate than light-based/LED sensors but the same plethysmographic limitations continue to produce inaccuracies in blood pressure measurements. On a positive note, the skin pigment of the rodent is not a measurement issue with piezoplethysmography as with photoplethysmography.

Although piezoplethysmography is better than photoplethysmography, both non-invasive tail-cuff blood pressure technologies correlate poorly with direct blood pressure measurements.

3. Volume Pressure Recording

The third sensor technology is Volume Pressure Recording (VPR). The Volume Pressure Recording sensor utilizes a specially designed differential pressure transducer to non-invasively measure the blood volume in the tail. Volume Pressure Recording will actually measure six (6) blood pressure parameters simultaneously: systolic, diastolic, mean, heart pulse rate, tail blood volume and tail blood flow.

Since Volume Pressure Recording utilizes a volumetric method to measure the blood flow and blood volume in the tail, there are no measurement artifacts related to ambient light; movement artifact is also greatly reduced. In addition, Volume Pressure Recording is not dependent on the animal's skin pigmentation: dark-skinned animals have no negative effect on its measurements. Very small, 10-gram C57BL/6 black mice are easily measured by the Volume Pressure Recording method.

Special attention is afforded to the length of the occlusion cuff with Volume Pressure Recording in order to derive the most accurate blood pressure readings.

Volume Pressure Recording is the most reliable, consistent and accurate method to non-invasively measure the blood pressure in mice as small as 10 grams to rats greater than 950 grams.

In an independent clinical validation study conducted in 2003 at Yale University, New Haven, Connecticut, Volume Pressure Recording correlated 99 percent with direct blood pressure:

"Volume Pressure Recording is excellent. It is very accurate and dependable. We performed experiments on temperature-controlled, adult rats and the non-invasive blood pressure measurements showed almost perfect correlation with invasive blood pressure measurements. We are very pleased with the results."

Numerous published research papers are available validating the accuracy, reliability and consistency of Volume Pressure Recording. See the Clinical Bibliography section.

Rodent Holders, Rat and Mouse

The ideal animal holder should comfortably restrain the animal, create a low-stress environment and allow the researcher to constantly observe the animal's behavior. A trained rat or mouse can comfortably and quietly remain in the holder for several hours.

It is very beneficial to incorporate a darkened nose cone into the rodent holder to limit the animal's view and reduce the level of animal stress. The animal's nose will protrude through the front of the nose cone allowing for comfortable breathing. The tail of the animal should be fully extended and exit through the rear hatch opening of the holder.

The proper size animal holder is essential for proper blood pressure measurements. If the holder is too small for the animal, the limited lateral space will not allow the animal to breathe in a relaxed fashion. The animal will compensate by elongating its body, thereby creating a breathing artifact. A breathing artifact will cause excessive tail motion and undesirable blood pressure readings.

Animal Body Temperature, Rat and Mouse

A NIBP system should be designed to comfortably warm the animal, reduce the animal's stress and enhance blood flow to the tail.

The rodent's core body temperature is very important for accurate and consistent blood pressure measurements. The animal must have adequate blood flow in the tail to acquire a blood pressure signal. Thermo-regulation is the method by which the animal reduces its core body temperature, dissipates heat through its tail and generates tail blood flow.

Anesthetized animals may have a lower body temperature than awake animals so additional care must be administered to maintain the animal's proper core body temperature. An infrared warming blanket or a re-circulating water pump with a warm water blanket is the preferred method to maintain the animal's proper core body temperature. The animal should be warm and comfortable but never hot. Extreme care must be exercised to never overheat the animal.

Warming devices such as hot air heating chambers, heat lamps or heating platforms that apply direct heat to the animal's feet are not advisable to maintain the animal's core body temperature. These heating devices will overheat the animal and increase the animal's respiratory rate, thereby increasing the animal's stress level. These conditions will elicit poor thermo-regulatory responses and create inconsistent and inaccurate blood pressure readings.

Environmental Temperature

The proper room temperature is essential for accurate blood pressure measurements. The room temperature should be at or above 26°C. If the room temperature is too cool, such as below 22°C, the animal will not thermo-regulate, tail blood flow will be reduced and it may be difficult to obtain blood pressure signals. A cold steel table or a nearby air conditioning duct is undesirable during animal testing.

Animal Preparation

The animal should be placed in the holder at least 10 to 15 minutes prior to obtaining pressure measurements. Acclimated animals will provide faster BP measurements than non-acclimated animals. Proper animal handling is critical to consistent and accurate blood pressure measurements. A nervous, stressed animal may have diminished circulation in the tail.

Most rodents will quickly adapt to new conditions and feel comfortable in small, dark and confined spaces. Training is not necessary to obtain accurate blood pressure readings; however, some researchers prefer training sessions. Rodents can easily be trained in approximately three days, with one 15-minute session each day, before beginning the experiment.

The animal should be allowed to enter the holder freely. After the animal is in the holder, adjust the nose cone so the animal is comfortable but not able to move excessively. The animal should never have its head bent sideways or its body compressed against the back hatch. The animal's temperature should be monitored throughout the experiment.

Conclusion

Tail-cuff NIBP measurements can be consistent, accurate and reproducible when studying awake and anesthetized mice and rats. In addition, multiple animal testing is very cost-effective for large scale, high throughput screening. Care must be exercised to properly handle the animals. Training the animals and monitoring the animal's temperature may also be beneficial.

The volumetric pressure recording method provides the highest degree of correlation with telemetry and direct blood pressure and is clearly the preferred tail-cuff sensor technology.
NIBP devices that utilize Volume Pressure Recording are a valuable tool in research and will continue to be beneficial in many study protocols. The main advantages are: (1) they require no surgery; (2) they are significantly less expensive than other blood pressure equipment, such as telemetry; (3) they can screen for systolic and diastolic BP changes over time in large numbers of animals; and (4) they provide the researcher with the ability to obtain accurate and consistent blood pressure measurements over time in long-term studies.

The Size That Knocked Pluto Out of the Planets

Pluto is no longer counted among the planets of the Solar System, but it would still be interesting to know the specs of this object that was once part of the same system our planet Earth belongs to. For ages, Pluto was known as the smallest planet in the Solar System. In fact, because of its almost minute size compared to the other planets, Pluto eventually earned the title of dwarf planet. Now that we realize the body we once knew as the planet Pluto is closer to a giant asteroid than to a full planet, its small size seems justified.

On the Size of Pluto

When it comes to diameter, Pluto is just about 70 percent the size of Earth's Moon. Going into specifics, that is a span of about 2,390 kilometers, or roughly 1,400 miles. Its surface area is equal to about 3.3 percent of Earth's, or roughly 1.665 × 10^7 square kilometers. This surface is said to be made up of ice and rock, with rock making up about 70 percent of it.

As for the interior, the core is thought to be made up largely of rock, with a diameter of about 1,700 kilometers, roughly 70 percent of the planet's entire diameter. Ideas have even sprung up that, since the mantle surrounding this core is largely ice, there could be liquid water just above it, possibly in a layer some 180 kilometers thick below the icy mantle.

As for the atmosphere, studies show it is made up largely of nitrogen, carbon monoxide, and methane. And since Pluto is very far from the sun, the temperature can drop to around -230 C. At temperatures like this, would it even be possible to melt ice on Pluto?

On the other hand, when it comes to volume, Pluto is said to measure about 6.39 × 10^9 cubic kilometers. There is gravity on Pluto, too, just as on Earth, but at a very low 0.658 meters per second squared.
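As a quick sanity check, if we assume Pluto is a sphere with the quoted 2,390-kilometer diameter, its surface area and volume follow directly from geometry. Both come out slightly above the figures quoted in this article, which likely trace back to an older, smaller radius estimate:

```python
import math

# Sanity-check the quoted figures, assuming Pluto is a sphere with the
# stated 2,390 km diameter. (Published radius estimates have varied, so
# small mismatches with the article's numbers are expected.)
radius_km = 2390 / 2

surface_area_km2 = 4 * math.pi * radius_km ** 2    # roughly 1.8e7 km^2
volume_km3 = (4 / 3) * math.pi * radius_km ** 3    # roughly 7.1e9 km^3
```

The same formulas with a radius near 1,150 km would reproduce the article's 6.39 × 10^9 km³ figure, which is one hint that the quoted diameter and volume come from different measurements.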

Tuesday, 29 March 2011

School Science Lab Equipment - What's Best In The Classroom?

What Should Be Allowed In School Science Labs?

Very often, one of the more exciting subjects for students to learn about in school is science. After all, while it can be pretty difficult to do any kind of exciting hands-on activity in a history, literature, or math classroom, it is possible to demonstrate a number of fascinating principles right there in the science classroom, provided the school has the equipment handy. That way, instead of just learning about science in theory, the kids can experience the subject firsthand in a much more memorable and interesting way.

Of course, this kind of situation is necessarily limited by the equipment available in the classroom for students and teachers to use, which raises an important question: what, exactly, should teachers have in their school science classrooms and/or labs? It is an important point to consider, because on the one hand, having a lot of equipment around can be a fantastic resource, giving students who might otherwise not be very interested in science a chance to see unusual phenomena in action.

On the other hand, having a lot of that kind of equipment can get expensive pretty fast, to say nothing of the fact that if we are not careful about deciding which equipment goes in the classroom, students might be dealing with more than they can handle, which can create a dangerous situation.

When we think of what sort of equipment there should be in an ideal science classroom, there is a lot to consider, because there are many different sciences. One of the first to consider is biology. In a biology classroom, it may be helpful for students to have samples of living things. This is a great opportunity to grow plants in the classroom, because students will enjoy watching them grow, and they are inexpensive. Also, unless the plants are poisonous, they pose no danger to the students.

Beyond that, there is the question of bringing creatures into the class. These can be trickier, but it is important not to have anything that requires too much space or attention. Any animal that can live in an aquarium or terrarium, and can be safely left over a long weekend with some food and water, is probably fine.

Then there is the question of equipment in chemistry or physics classrooms. In these there is the potential for fire, or fumes, or a number of other dangerous situations that could arise if the students, or the teachers, are not careful. The best policy is one that maintains an optimal balance of safety, cost, and of course educational value. Although it is certainly tempting to wish for everything under the sun in your science lab, teachers must bear in mind which specific pieces of equipment will actually get used the most often and to greatest effect.

A well-equipped science classroom is a great resource, and as long as teachers and school administrators take the time to make informed decisions about the sort of equipment to buy, they should make every effort to get the best and most practical lab equipment possible.

3/15/2011

Busting the Magnesium Fire Myth

There is concern over the use of an ingredient in fireworks (magnesium) to create lightweight parts for cars, sporting equipment, medical devices and more. True, magnesium reacts to fire with a very bright, extremely hot flame. It can also react explosively with water. The explosive nature of the metal has naturally given magnesium alloys a misguided reputation for exploding while in use in a car or piece of aerospace equipment. Engine parts made of magnesium alloys have suffered the worst of this reputation, as engines are known to get extremely hot during normal operation.

How NOT to start a magnesium fire
The high heat from the engine is also a concern for car buyers. As the engine runs, it reaches very high temperatures. Dust and shavings from the magnesium cast block can potentially ignite in such an environment. However, the cooling systems built into the cars using the magnesium engines prevent the high heat from becoming an issue. Proper maintenance can catch a problem that will produce metal shavings before they become a fire hazard as well.

In reality, magnesium alloys do not ignite unless exposed to flame or heat that is much hotter than anything produced by a working engine. The metal burns at temperatures in excess of 3,600 degrees Fahrenheit, or about 2,000 degrees Celsius, but magnesium is not a metal that spontaneously ignites in air. Furthermore, you can't start a fire by throwing water on a magnesium engine; the magnesium must already be ignited before it will burn.

Putting out magnesium fires
It is true that burning magnesium splits water into hydrogen and oxygen, which only adds to the fire. To extinguish a magnesium fire, use sodium chloride (ordinary salt). The special chemical fire extinguishers recommended for magnesium fires use a powder based on table salt to smother the fire without causing another chemical reaction. These extinguishers are classified as Class D fire extinguishers and are a good idea for owners of magnesium block engines.

Other flammable metals in auto manufacturing
Magnesium has a bad name because of its flammability, but it is not the only flammable metal in the auto-making world. Aluminum, also used extensively in car bodies, oxidizes over time, and aluminum powder combined with iron rust forms thermite, which burns just as hot as magnesium. The lithium batteries being developed for electric cars are also highly flammable: lithium combusts on contact with water and can react with moisture in the air.

The myth of cars exploding because their magnesium engines overheat, or because water gets under the hood, is completely false. It is based on misunderstood chemical properties of the metal magnesium.

nanoMAG, LLC is a subsidiary of Thixomat, Inc. a company with more than 20 years experience in the research, development, and marketing of technologies for the production of products utilizing magnesium alloy. Based in Ann Arbor, Michigan, nanoMAG supplies precision magnesium sheet and short-run specialty alloys to diverse industries. Contact nanoMAG at http://www.nanomag.us/.

Article Source: http://EzineArticles.com/?expert=Nancy_Millani

Monday, 28 March 2011

Zero Point Field Energy

What is the zero point field?

It is theorized that the zero-point field exists in a vacuum, or empty space, at a temperature of absolute zero. At absolute zero kelvin, all heat and thermal radiation are absent. Under this condition, the background vacuum energy serves as the zero point, or reference, for all processes.

Theoretically, the zero-point field acts as a sea of electromagnetic radiation that is homogeneous and isotropic, or the same in all directions.

Practical implications of the zero-point field

In the zero-point, or ground, state, the system is at its lowest energy. The residual energy that remains in the ground state is called Zero Point Energy (ZPE).
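The standard textbook illustration of this residual energy is the quantum harmonic oscillator, whose ground state retains E0 = (1/2) hbar omega even at absolute zero. A quick sketch, where the vibration frequency is an assumed, typical molecular value:

```python
import math

# Zero-point energy of a quantum harmonic oscillator: E0 = (1/2) * hbar * omega.
# This residual energy remains even at 0 K. The frequency below is an
# illustrative assumption (a typical molecular vibration, ~1e14 Hz).

hbar = 1.054571817e-34        # reduced Planck constant, J*s
frequency_hz = 1.0e14         # assumed vibration frequency, Hz
omega = 2 * math.pi * frequency_hz

zero_point_energy_j = 0.5 * hbar * omega   # about 3.3e-20 J per oscillator
```

Tiny as that number is per oscillator, the zero-point argument sums it over every mode of the field, which is where the claims of a vast energy reservoir come from.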

The exciting thing about ZPE is that in 1913 the German physicist and Nobel Prize winner Otto Stern, along with Albert Einstein, argued that this residual energy really exists, and some have since proposed that it could be used as an alternative form of energy.

Just imagine the possibilities if we could harness this surprising amount of free energy and use it to power our homes and industries.

Zero point field Debate

The zero-point field suggests the existence of free energy, but a number of issues arise from this theory, such as:

Does free energy really exist?
Can it be used?
How much will it cost to use?
Is it as abundant and efficient as its proponents claim?

Physicists in research laboratories and universities are exploring ways to tap this vast reservoir of free energy. It may one day become a major source of alternative energy.

Zero Point Field and Free Energy Devices

The practical application of zero-point field techniques and ZPE is the free energy device: a device for collecting and transferring energy from a source that science does not yet understand. We can also think of it as a device that collects energy at no cost. The free energy device is a vivid example of the legendary perpetual motion machine.

For centuries, people have tried to invent a perpetual motion machine: a machine which, once switched on, continues to run forever without any energy input and never runs out of energy.

No such machine exists today, but that does not mean it's impossible. Perhaps future technology will be able to use the free energy of the zero-point field. This is certainly something everyone is looking forward to finding out.

Discovering the Applications of Infrared Spectrometer

Have you ever heard of the infrared spectrometer? This instrument is applied in many areas of science and technology, especially in chemistry and medicine. In addition, it is used in forensic and crime analysis. Now let's expand your knowledge of this instrument with the paragraphs below.

Sometimes you will hear the term 'infrared spectroscopy' used for this technique. It is basically used to identify various substances according to their ability to absorb infrared wavelengths. In short, the machine measures how much infrared radiation a substance absorbs at each wavelength. All substances have molecules which can convert infrared radiation into heat. The machine generates infrared rays and directs them at the substance being tested.

Now we are going to discuss the uses of the infrared spectrometer in chemistry. The instrument is used to test the level of polymerization in chemical substances or compounds. Polymerization is a reaction in which monomer molecules link together into polymer chains. During the process, the machine measures the quantity of molecular bonds that change. This testing is useful for identifying the uses of various substances in everyday life.

Furthermore, you can find infrared spectroscopy in the health industries, especially among medicine manufacturers, where it is very useful for testing product quality. It is important to make sure that drugs are produced to the highest quality to protect consumers' health; no one wants to risk taking an untested medicine, since misuse can be deadly. The Food and Drug Administration (FDA) also takes advantage of this instrument, using the near-infrared method rather than the far-infrared one. With it, the FDA can protect consumers from the potential dangers of drugs.

The infrared spectrometer is also applied in crime investigation and forensic analysis. In this application a computer database is needed to support the instrument: the database stores the infrared absorption graphs of known substances, which can then be compared against samples collected at a crime scene to find evidence. Now you know the main uses of the infrared spectrometer. Hopefully this article has helped you.
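The database matching described above can be sketched in a few lines of Python. The spectra and substance names below are made-up illustrative numbers, not real infrared data; the idea is simply that an unknown sample's absorption pattern is compared against stored reference patterns and the closest match is reported.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two absorption spectra sampled at the same
    wavelengths (1.0 means an identical spectral shape)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical reference database: substance -> absorbance values
# sampled at a fixed set of infrared wavelengths.
reference_db = {
    "ethanol": [0.9, 0.1, 0.4, 0.8, 0.2],
    "acetone": [0.2, 0.8, 0.7, 0.1, 0.3],
    "water":   [0.95, 0.05, 0.1, 0.9, 0.1],
}

def identify(measured):
    """Return the reference substance whose spectrum best matches."""
    return max(reference_db,
               key=lambda name: cosine_similarity(measured, reference_db[name]))

measured = [0.88, 0.12, 0.38, 0.82, 0.18]   # spectrum of an unknown sample
print(identify(measured))                    # closest match in the database
```

Real forensic systems use far larger spectral libraries and more sophisticated matching, but the compare-against-known-graphs principle is the same.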

Sunday, 27 March 2011

The Pathway Of Food

The mouth performs many functions in the digestion of food. Besides chewing food to reduce it to smaller particles, the mouth also senses the taste of the foods we consume. When the tongue tastes food through its taste buds, it identifies foods on the basis of their specific flavors. Sweet, sour, salty, bitter, umami and, perhaps, the tastes of water and fat comprise the taste sensations we experience. Surprisingly, the nose and our sense of smell greatly contribute to our ability to sense the taste of food. When we chew food, chemicals are released that stimulate the nasal passages. Thus, it makes perfect sense that when we are sick and our noses are stuffed up and congested, even our favorite foods will not taste as good as they normally do.

Once we have established (or even begun to anticipate) the taste of the food in our mouth, the first step that signals the rest of the digestive system to prepare for digestion is the production of saliva. In the mouth, salivary glands produce saliva, which functions as a solvent so that food particles can be further separated and tasted. In addition, saliva contains a starch-digesting enzyme, salivary amylase. This and other enzymes are a key part of digestion. Saliva and your teeth are essential in breaking down your food for digestion. If you don't chew your food well enough, you make things much harder for the rest of your digestive system; nausea or stomach pain can sometimes be caused by a lack of chewing.

After you are done chewing, mucus, another component of saliva, makes it easy to swallow a mouthful of food. The food then travels down the esophagus into the rest of the digestive system, where it comes into contact with a wide variety of digestive enzymes.

When it comes to the digestive system's enzymes, the pancreas and the small intestine produce the most; however, the mouth and the stomach also contribute their own enzymes to digestion. The mouth is an essential part of the digestive system, but for you to digest everything completely you need all the parts of the digestive system working in harmony. If you want to learn more about how you digest your food, feel free to look up more information in a science textbook and expand your knowledge.

Random Variables and Probability Distributions

Random Variable:

A random variable, also known as a stochastic variable, is a variable whose value is not known in advance. It can also be defined as a function that associates a numerical value with each outcome of an experiment. For example, tossing a coin gives either a head or a tail; letting X denote the number of heads defines a random variable. Other examples that give an easy understanding include: throw two dice and let X = the sum of the numbers facing up (each die showing a number from 1 to 6); or throw a die over and over until you get a six and let X = the number of throws. For a more practical example away from dice, let X = the number of goals scored by a soccer player in the season; here X may be 0, 1, 2, 3, ...

There are two types of random variables: discrete and continuous. A discrete random variable takes values from a countable set (such as the integers), with each value in the range having a probability of zero or greater. An example of a discrete random variable is the number of dollars in a randomly chosen bank account. Discrete random variables are further divided into finite and infinite. A finite discrete variable can take only finitely many values, like the outcomes of rolling a die. An infinite discrete variable can take an unlimited number of values, like the count of the stars in the whole universe.
A continuous random variable takes values from an uncountable set: it can take any value within its range. Good examples are the measure of your height or your current temperature.
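The dice and height examples above can be simulated directly. The sketch below (function names are my own, and the height distribution's mean and spread are illustrative assumptions) shows how a discrete random variable takes only countable values while a continuous one can take any value in an interval.

```python
import random

random.seed(1)

# Discrete random variable: X = sum of the numbers facing up
# when two fair dice are thrown (X takes only the integers 2..12).
def two_dice_sum():
    return random.randint(1, 6) + random.randint(1, 6)

# Discrete random variable: X = number of throws until a six appears
# (X takes the values 1, 2, 3, ... with no upper bound).
def throws_until_six():
    throws = 0
    while True:
        throws += 1
        if random.randint(1, 6) == 6:
            return throws

# Continuous random variable: e.g. a height measurement in cm;
# it can take any value in an interval, not just whole numbers.
def random_height():
    return random.gauss(170.0, 10.0)

samples = [two_dice_sum() for _ in range(10_000)]
print(min(samples), max(samples))   # values stay within 2..12
```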

Probability distribution of a random variable:

A probability distribution works hand in hand with a random variable, whether discrete or continuous, to determine the likelihood of an event falling within a particular interval. For a continuous variable, the probability distribution describes the range of all possible values the random variable can take and the probability that its value falls in any measurable subset of that range.

Probability distributions are used because simple numbers alone may be inadequate to describe a quantity. For instance, the spread of a variable in almost any measurement, whether the durability of metals, traffic flow, people's heights or sales growth, involves some aspect of a probability distribution. In physics, many studies are linked to probability distributions, from the kinetic properties of gases to the quantum mechanics of fundamental particles.

Various probability distributions are used in different applications. The most important and frequently used include the Gaussian (normal) distribution and the categorical distribution. The normal distribution has a bell-shaped curve and is used to approximate naturally occurring distributions over the real numbers. The categorical distribution, on the other hand, describes experiments with a finite, fixed number of outcomes. An example is tossing a fair coin, where the possible outcomes are a head and a tail, each with an equal probability of 0.5.
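Both distributions mentioned above can be checked by simulation. In this sketch (sample sizes and the standard-normal parameters are my own choices for illustration), the coin toss is a categorical distribution with two equally likely outcomes, and the Gaussian samples cluster around their mean in the familiar bell shape.

```python
import random
import statistics

random.seed(0)

# Categorical distribution: a fair coin toss has two outcomes,
# each with probability 0.5. Estimate P(head) from many tosses.
outcomes = [random.choice(["head", "tail"]) for _ in range(100_000)]
p_head = outcomes.count("head") / len(outcomes)
print(p_head)   # close to 0.5

# Normal (Gaussian) distribution: samples cluster symmetrically
# around the mean, producing the bell-shaped curve.
normal_samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
print(statistics.mean(normal_samples))    # close to 0
print(statistics.stdev(normal_samples))   # close to 1
```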

Saturday, 26 March 2011

Chili Pepper and Oleoresin Capsicum


Chili peppers (Capsicums) differ in taste, appearance, aroma, pungency, color, origin, growing process, DNA structure and toxicity levels. Capsicum consists of 38% pericarp, 2% inner sheath, 56% seeds and 4% stalks. The property that separates the Capsicum family from other plant groups, and the quintessence of the chili pepper, is an alkaloid called capsaicin (kap-sa-i-sin), an unusually powerful and pungent crystalline substance found in no other plant. Capsaicin is the source of the pungency and heat in Capsicums.

Oleoresin Capsicum (OC) is the extract of the dried ripe fruits of Capsicums (chili peppers) and contains a complex mixture of essential oils, waxes, colored materials and several capsaicinoids. It also contains resin acids and their esters, terpenes, and oxidation or polymerization products of these terpenes. One kilogram of Oleoresin Capsicum is equivalent to approximately 18 to 20 kilograms of good-grade, well-ground capsicum, though this ratio may vary depending on the type of capsicum being processed.

The industrial spice oleoresin extraction industry came into being with the development of the oleoresin process during the 1930s. The process essentially involves concentrating the oleoresin from the capsicum plant by evaporating the solvent and, finally, desolventising to reach the limits on residual solvent. Oleoresin, being a natural product, is thermally sensitive, and the processing must be designed to minimize thermal degradation and preserve the full pungency. Conventional concentration and desolventisation techniques employ batch evaporation, in which the oleoresin gets cooked over an extended length of time, directly diminishing its quality.

There are several other ingredients in Capsicum that directly determine the color of the oleoresin. One such ingredient is the carotenoid pigment called capsanthin, with molecular formula C40H56O3 and systematic name (3R,3'S,5'R)-3,3'-dihydroxy-β,κ-caroten-6'-one. Capsanthin has a molecular weight of 584.85.
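As a quick check, the molecular weight of C40H56O3 can be recomputed from standard atomic weights. The short sketch below (variable names are my own) gives a value within a few hundredths of the figure quoted above; the small difference comes from which atomic-weight table is used.

```python
# Standard atomic weights (IUPAC conventional values, g/mol).
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}

# Capsanthin: C40 H56 O3
formula = {"C": 40, "H": 56, "O": 3}

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in formula.items())
print(round(molar_mass, 1))   # ~584.9, in line with the quoted 584.85
```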

The author Kamran Loghman was the President and CEO of Zarc International, Inc., (1988-2005) the manufacturer of tear-gas type devices with worldwide distribution. Kamran is a nationally recognized expert in federal court proceedings and a top alcohol disorder researcher. He is an inspirational speaker, a scholar of Eastern philosophy and martial arts expert and historian. Presently Kamran is revolutionizing Business Consulting and Personal Development practices. Contact Kamran at http://www.kamranloghman.com/

Article Source: http://EzineArticles.com/?expert=Kamran_Loghman

Article Submitted On: February 17, 2011

Applications of the Ideal Gas Law I

In this article, I describe how easily problems involving the ideal gas law can be solved. Again, I preach the same message: Start every problem with a statement of the appropriate fundamental principle. The trick here is a very simple one --- just start every problem solution with the ideal gas law expressed in a certain way. This approach is explained in some detail in the first solved problem. Its utility is then demonstrated in two additional problems in this article plus in two seemingly difficult problems in the next article. In all cases, there is no mention of Boyle's law, Charles' law, and the many other forms that the ideal gas law takes in its various applications. Every problem solution is approached the same way.

Since the text editor does not support certain mathematical symbols, I must use some unusual notation: (a) Almost every variable is represented by a capital letter with lower case letters used as subscripts. For example, the initial (i) pressure (P) is represented by Pi. (b) Powers are designated by a ^. As examples, "X squared" is written X^2, and "2.2 times 10 to the fifth" is written 2.2 x 10^5.

Problem. A container of volume 2.00 L holds 3.00 mol of an ideal gas at a temperature of 293 K and a pressure of 36.1 atm. The gas is compressed to a volume of 1.00 L while its temperature is held constant by placing the container in a large vat of water at 293 K. What is the new pressure of the gas?

Analysis. With the initial and final state of the gas designated by (i) and (u), we have from the ideal gas law

......................................Ideal Gas Law

..............................PiVi/NiTi = R = PuVu/NuTu.

In this particular case, a fixed amount of gas is compressed at constant temperature; so Ni = Nu and Ti = Tu, and the previous equation reduces to

.......................................PiVi = PuVu.

With Pi = 36.1 atm, Vi = 2.00 L, and Vu = 1.00 L,

................Pu = PiVi/Vu = (36.1 atm)(2.00 L)/(1.00 L) = 72.2 atm.

There is a simple method displayed in this problem that you can use over and over when applying the ideal gas law. If you are comparing two states (say states (i) and (u)), you just write

...................................Ideal Gas Law

........................PiVi/NiTi = R = PuVu/NuTu.

Then you ask: "What is the same for both states?". Those variables that are the same can then be cancelled on both sides of this equation. For example, in this problem the number of moles and the temperature are the same. Consequently, those two terms are cancelled, leaving

..............................................PiVi = PuVu.

Problem. The container of the previous problem springs a small leak, and gas slowly leaks out. The leak is discovered and repaired, and the pressure of the gas is measured to be 20.4 atm when the container is sitting in the water at 293 K. How much gas is left in the container?

Analysis. This time, Vi = Vu and Ti = Tu, so

...........................Ideal Gas Law

...................PiVi/NiTi = R = PuVu/NuTu

reduces to

.............................Pi/Ni = Pu/Nu,

and.....................Nu = PuNi/Pi = (20.4 atm)(3.00 mol)/(36.1 atm) = 1.70 mol.

Problem. A car tire is filled to a gauge pressure of 2.10 x 10^5 N/m^2 when the air temperature is 300 K and the atmospheric pressure is 1.00 x 10^5 N/m^2. The car is driven on the highway and the gauge pressure is found to increase to 2.30 x 10^5 N/m^2. Assuming the volume of the tire does not change, what is the temperature of the air inside the tire at the higher pressure?

Analysis. The absolute pressure inside the tire increases from (2.10 + 1.00) x 10^5 N/m^2 = 3.10 x 10^5 N/m^2 to (2.30 + 1.00) x 10^5 N/m^2= 3.30 x 10^5 N/m^2. The volume and number of moles of air do not change when the tire heats up. Using

.............................Ideal Gas Law

.....................PiVi/NiTi = R = PuVu/NuTu

with Vi = Vu and Ni = Nu, we have

..............................Pi/Ti = Pu/Tu.

Consequently,

.........................Tu = PuTi/Pi = (3.30 x 10^5 N/m^2)(300 K)/(3.10 x 10^5 N/m^2) = 319 K.
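The compare-two-states method used in all three problems can be captured in one small function. This is an illustrative sketch (the function name and argument order are my own): you pass the initial state, all but one of the final-state variables, and it solves PiVi/NiTi = PuVu/NuTu for the missing one.

```python
def solve_missing(Pi, Vi, Ni, Ti, Pu, Vu, Nu, Tu):
    """Solve PiVi/(NiTi) = PuVu/(NuTu) for whichever final-state
    variable is passed as None; all initial-state values are known."""
    k = Pi * Vi / (Ni * Ti)   # this ratio is the same in both states
    if Pu is None:
        return k * Nu * Tu / Vu
    if Vu is None:
        return k * Nu * Tu / Pu
    if Nu is None:
        return Pu * Vu / (k * Tu)
    if Tu is None:
        return Pu * Vu / (k * Nu)
    raise ValueError("exactly one final-state variable must be None")

# Problem 1: isothermal compression (Ni = Nu, Ti = Tu); find Pu.
print(round(solve_missing(36.1, 2.00, 3.00, 293, None, 1.00, 3.00, 293), 1))
# -> 72.2 (atm)

# Problem 2: leak at constant V and T; find the moles left.
print(round(solve_missing(36.1, 2.00, 3.00, 293, 20.4, 2.00, None, 293), 2))
# -> 1.70 (mol)

# Problem 3: tire heating at constant V and N (absolute pressures); find Tu.
print(round(solve_missing(3.10e5, 1.0, 1.0, 300, 3.30e5, 1.0, 1.0, None)))
# -> 319 (K)
```

Cancelling the variables that are the same in both states, as the article recommends, is exactly what happens algebraically inside each branch.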

Notice how easily these problems were solved. I am certain that if you just get your students to use the approach I have outlined here, their ability to work with the ideal gas law will be greatly enhanced.

Dr William Moebs is a retired physics professor, who taught at two Universities: Indiana-Purdue Fort Wayne and Loyola Marymount University. You can see hundreds of examples illustrating how he emphasizes fundamental principles by consulting PHYSICS HELP.


Spectrophotometer Calibration

What is A Spectrophotometer?

A spectrophotometer can be found in many research, biology, chemistry and industrial laboratories. Spectrophotometers are used for research and data analysis in various scientific fields. Some of the main fields in which a spectrophotometer is used are physics, molecular biology, chemistry and biochemistry. Typically, the name refers to Ultraviolet-Visible (UV-Vis) spectroscopy.

What a spectrophotometer does is transmit and receive light. It analyzes samples of test material by passing light through the sample and reading the intensity at each wavelength. Different samples change the light in different ways, which allows researchers to learn more about the test material by observing how the light behaves as it passes through the sample. These results must be accurate, or the researcher will simply waste time on a flawed instrument. The only way to ensure accuracy is by performing a spectrophotometer calibration.

What Is Spectrophotometer Calibration?

Spectrophotometer calibration is a process in which a researcher or scientist uses a calibration standard to test the accuracy of the light source. This procedure is critical to ensure that the spectrophotometer is working properly and its measurements are correct. The calibration procedure varies slightly between instruments. Most major manufacturers provide a detailed calibration guide in the owner's manual so that researchers know how to calibrate the equipment properly. Keeping a calibration log is also important, to show when the last calibration was performed and by whom.

Tools Used to Calibrate Spectrophotometer

Spectrophotometer calibration filters, also known as neutral density filters, are primarily used to calibrate various transmittance values and are derived from NIST (National Institute of Standards and Technology) standards, which include SRM 2031, SRM 2034, SRM 930e and others.

Some spectrophotometer manufacturers recommend that researchers send the machine in to be calibrated. The problems with sending the machine in are the lost research time, the shipping costs and various other outside factors. It is best and most convenient to calibrate the spectrophotometer without sending it out of the lab.

Using Neutral Density Filters To Calibrate a Spectrophotometer

For decades, liquid calibration standards were used. Starting in 2010, solid-state filters began replacing liquid filters because they never need to be re-calibrated or replaced. Solid neutral density filters are also easy to handle and will not break if they are accidentally dropped or mishandled.

The solid-state spectrophotometer calibration neutral density filters can test for photometric accuracy as well as stray light. To ensure accuracy, testing is done at a minimum of five test points. These calibration filters can be used to calibrate machines made by Thermo Scientific, Beckman Coulter, Hitachi, Perkin Elmer, Hewlett Packard, Agilent, Shimadzu and more.

How to Calibrate Using Neutral Density Filters

In spectrophotometer calibration, a reference is used to zero out the instrument; when using neutral density filters, no special reference filter is required. To calibrate the device, simply place the neutral density filter inside the spectrophotometer, zero out the settings, and run the instrument. The results are then compared against the calibration certificate supplied by the manufacturer of the calibration standards. If the results fall within the tolerance range specified by the manufacturer, the spectrophotometer is properly calibrated.
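The compare-to-certificate step can be sketched as a simple tolerance check. All the numbers below, including the wavelengths, the transmittance values and the tolerance, are hypothetical stand-ins for what a real calibration certificate would specify; the logic is just "every reading must sit within the allowed deviation of its certificate value".

```python
# Hypothetical certificate values and measured transmittance (%T)
# for a neutral density filter at five test wavelengths (nm).
certificate = {250: 10.2, 350: 10.1, 450: 10.0, 550: 9.9, 650: 10.1}
measured    = {250: 10.3, 350: 10.0, 450: 10.1, 550: 9.8, 650: 10.2}

# Allowed deviation in %T (assumed; a real certificate states this).
tolerance = 0.25

def is_calibrated(measured, certificate, tolerance):
    """True if every reading is within tolerance of its certificate value."""
    return all(abs(measured[wl] - certificate[wl]) <= tolerance
               for wl in certificate)

print(is_calibrated(measured, certificate, tolerance))   # True for these numbers
```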

Before calibrating a spectrophotometer, and to ensure proper readings, it must be allowed to warm up; most models take about 10 minutes. Calibration must not be performed while the machine is warming up, since attempting to calibrate during the warm-up phase will throw the settings off.

More information about cuvette manufacturing and spectrophotometers can be found at PrecisionCells.com

Simcha Dov is a certified calibration specialist and has assisted in the designing and manufacturing of modern day neutral density filters.


Friday, 25 March 2011

The Rise of Environmentally Friendly Flowers

Despite the economic woes that continue to affect the UK, interest in environmentally friendly and socially kind flowers continues to grow, with florists reporting strong demand for flowers with a 'friendly' history. The good news for consumers is that there's no shortage of flowers labelled with badges such as Fairtrade, Florverde or FFP, but what does it all mean for you next time you're placing your order for a flower delivery?

The labels explained.

Put simply, Fairtrade is a system that adds a premium to the purchase price, which is then passed directly into a workers' fund. Both Colombia and Kenya have Fairtrade flower farms and, it has to be said, it's a wonderful system that has made a big difference to the lives of workers on flower farms.

The flowers must meet international fair-trade standards set by the international certification body Fairtrade Labelling Organisations International. These standards are agreed through a process of research and consultation with key participants in the Fairtrade scheme, including the producers themselves, traders, workers' unions, academic institutions and labelling organisations such as the Fairtrade Foundation.

Fairtrade flowers include Roses, Carnations, Lisianthus, Lilies and Sunflowers.

However, due to the costs and regulation involved and the relatively small quantity of flowers left after the major supermarkets have taken their supply, seldom will you find Fairtrade flowers in an independent florist shop. It's even less likely you'll find them when placing an order for delivery by a retail florist. That's why other labels are so popular amongst florists.

Perhaps the most common is MPS, the Dutch floriculture environmental programme, which it's probably fair to say has done more to improve the environmental and social friendliness of flowers over the years than any other initiative.

MPS is a fully international certificate (unlike others, which concentrate only on the developing world), promoting best practice in four key areas: environment, retail, quality and social. Participating growers minimise the damage their production does to nature, investing in complex groundwater heating systems for example, whilst ensuring that the health and safety and terms of employment for staff are top notch.

MPS's key aims are to:
• Reduce energy consumption
• Reduce chemical crop protection
• Increase biological and integrated protection
• Reduce the use of artificial nutrition
• Reduce landfill
• Make better, more responsible use of water
• Encourage the use of biodegradable or recyclable packaging materials

No premium is charged for MPS labelled flowers and most florists should be able to tell you if their flowers are MPS accredited as it will be printed on the sleeve the flowers arrived in.

The next label to talk about is Fair Flowers Fair Plants (FFP), which, like MPS, demands strict environmental and social requirements and works across the globe.

Growers signed up to the scheme have to be certified in order to prove that they meet the criteria. Certification includes regular reports by the company and inspections, both scheduled and unscheduled, with which the company has to co-operate.

It has been established to stimulate the production and trade of flowers and plants cultivated in a sustainable manner: flowers and plants are cultivated in a way that respects people and the environment. Florists selling and delivering floral gifts will usually display a window vinyl or a logo on their website if they subscribe to this scheme. Once again, no premium is charged for FFP labelled flowers.

Florverde, meanwhile, is a label that applies only to Colombian flower producers, ensuring compliance with strict international social and environmental standards from planting to post-harvest.

This initiative began in 1996 to develop best practices that could help ensure the quality of life of workers and their families, as well as environmental sustainability for generations to come. It was created as a strategic initiative for promoting sustainable floriculture with social responsibility at both the company and industry-wide levels.

When you buy Colombian flowers, your florist should be able to advise you whether the flowers are from Florverde-certified farms, as it will be marked on the box. Most Colombian flowers imported to the UK are from Florverde farms, simply because the farms that care tend to be the best farms growing the best products, as is the case with all of the labels discussed in this article.

Other labels doing a similar job include Fiore Giusto in Italy, the Kenya Flower Council and FlorEcuador.

Why can't we buy UK grown flowers?

The next question to ask for those of you interested in environmentally friendly flowers is why can't we buy UK grown flowers?

The simple answer is we can and, when they're in season, we should. But the bottom line is there simply aren't enough British flowers, in either quantity or variety, to meet UK demand. And that can't be changed. Not only is there not enough land or workers, but even if there were, the cost, both in cash and environmental terms, would be huge.

Flowers produced closer to home aren't always greener either. Research published by Cranfield University back in December 2006 showed that carbon emissions from Kenyan flowers, including air freight, were nearly six times lower than for Dutch flowers. There are several reasons for this, including the fact that Kenya has optimal growing conditions and a ready supply of natural heat and light, while growers in Holland rely on significant inputs of gas and electricity.

What applies to Holland would apply to the UK. The truth is that it is better to source the majority of flowers from overseas, leaving UK growers to concentrate on specialist crops. What's more, if consumers stopped buying foreign imports it would have a major impact on the lives of thousands of overseas workers and their families, undoing the good work of the friendly labels.

In short, all of the organisations mentioned above have helped the cut flower industry move forward in leaps and bounds in terms of environmental friendliness and in the way flower farm workers are looked after, with healthcare, schooling and salaries all far improved from years ago.

So, next time you're buying flowers from a florist or organising a flower delivery to someone special, look out for the labels and take time out to check the friendly credentials of the flowers you're buying - you'll be surprised how good they are.

Considering Geothermal Energy? Here Are the Facts on Geothermal Energy

Geothermal energy is an alternative source of energy generated deep beneath the surface of the earth. The term 'geothermal' comes from two Greek words: geo, meaning earth, and thermos, meaning heat. With this power source, heat from within the earth is used to generate electricity.

Although the energy is found far beneath the earth it comes to the surface by way of geysers, hot water springs and volcanoes. This power is accessed through the use of geothermal plants. They are designed in such a way that enables us to use the steam from the interior of the earth on the surface. It is this steam that is used to rotate the turbines that ultimately produce electricity.

Just like any other alternative power source, this one has its own advantages and disadvantages. As for the advantages, this energy source is environmentally friendly: no fossil fuels are burned during production. On top of that, the power plant where production takes place does not add to the greenhouse effect.

It is also inexhaustible, because heat will always be produced beneath the surface of the earth; as long as that continues, the steam can be used to generate power. This source is also efficient in the sense that the costs are fairly low, especially in the long run.

When it comes to its shortcomings, the major one is availability. Heat is constantly being generated deep within the earth, but it is not easy to find locations where it can be harnessed. In addition, although you will not spend much in the long run, the initial investment does not come cheap.

The other disadvantage is that although fossil fuels are not burnt in the process, as you exploit the steam within the earth there are other things that could happen as well. One of these is that you might disturb the methane deposits beneath the surface. When this happens the methane could be released into the atmosphere thus polluting the environment.

To conclude, an interesting fact about geothermal energy is that the resource is so vast that it exceeds that of gas, coal and uranium combined. The two leading producers of this power in the world are Iceland and the Philippines. If you are one of those folks who are environmentally conscious, this is something worth considering.

Thursday, 24 March 2011

Nutraceutical Sciences - Are Vitamin Supplements Good For You?

There are many reasons why one might wish to take the output of the nutraceutical sciences industry and be grateful for it. Not only does the industry include everything from skin moisturisers to bodybuilding powders; it is also responsible for one of the most lucrative markets in Western civilisation: vitamin supplements.

These have increasingly become an important part of Western diets. Many people take vitamin supplements because they have been instructed to do so by their doctor, specialist or consultant. Dietary specialists often need to make recommendations to ensure that their patients meet their dietary requirements across the full range of recommended foods. Sometimes a patient will be unable to eat certain foods because of a condition they are suffering from. Those foods may contain essential vitamins, and so the nutraceutical sciences industry has stepped in to manufacture replacement supplements that contain the necessary vitamins the patient needs, but without the toxins that the food in its natural state might contain.

Of course, not everyone who has a list of vitamin supplements on their shopping list requires them as part of their diet. The majority of people who eat a well rounded, balanced diet don't need to supplement their diets with additional vitamins from the nutraceutical sciences industry. There is the possibility that a person can overdose on vitamins - like most things in life the phrase 'too much of a good thing' springs to mind.

The nutraceutical sciences can produce quality vitamin supplements, which for many patients are a great way to ensure a balanced diet when it might otherwise be impossible. For those with a balanced diet of normal, natural foods, vitamin supplements are not necessary, and in certain circumstances they could have the opposite of the desired effect.

Report Presentation - The Form of the Report

There are various formats for laboratory reports in use. These vary according to the type of work being reported, the purpose of the work or the report, and the recipients for whom the report is intended. Reports of original research conducted to further scientific knowledge in a specific area require a different format from reports of quality-control experiments conducted in a company laboratory, and yet other formats may be required for student reports on experiments. A student may be expected to follow one format for recording experiments in a laboratory record book as they are being performed, and a different format when writing up the information from the record book for a formal laboratory report later in the term. The format may also vary with the type of laboratory work done in different subjects. Where the purpose of the experiment is to confirm or reject a hypothesis, the format of the report will differ from that of an investigation of the quality, composition or properties of a product. The purpose of most student laboratory reports is to show the student's understanding of the aim, theory, laboratory procedures and so on, so these elements would be emphasized in a format prescribed for such reports.

Finally, if the report is to be submitted to a lecturer, the format may be substantially different from that of a report submitted for publication in a professional journal, while a different format might be expected in a report to a government, an agency or private company. An obvious way in which laboratory report formats differ is the division of the report into sections. Whereas all laboratory reports can be thought of as consisting of four main parts (introduction, procedure, results and conclusion), there is considerable variation in the headings under which the information in each section of the report is to be written. Some of these headings may have an equivalent meaning: apparatus = materials, procedure = methods, data = results. In other cases, more specific headings are added where there is a need to draw attention to specific information in the report.

Example of the Form of a Report

1. Title needs to state the nature of the work or investigation briefly (in fewer than 10 words) and accurately. This may also be called the Heading in a laboratory record book. State the date on which the experiment was performed.

2. Aim or Objective. This may be used in the place of the heading: Introduction. It is used to state clearly and concisely the purpose of the work.

3. Theory emphasizes the need to identify the background theory leading up to the experiment or the theory which the experiment is designed to illustrate or prove. This may also include a brief literature review to provide the status of current knowledge in the field.

4. Hypothesis - The hypothesis should be identified where the work is based on previous findings or involves the application of established theory to new situations. Note, however, that not all laboratory work is concerned with the testing of hypotheses.

5. Apparatus or Materials emphasizes the need to specify the apparatus to be used and the way it is set up, listing items in the order in which they are to be used.

6. Procedure or Methods emphasizes the need to provide a step-by-step account of how the work was done; a separate heading may be used. This can be important for assessing the quality of the investigation later. It may include reference to a specific ISO or other internationally accepted laboratory standard procedure.

7. Diagrams. A separate heading in the Procedure section devoted to diagrams or photographs emphasizes the importance of presenting this information in a clear, concise form rather than in written form.

8. Measurements or Results emphasizes the importance of reporting specific readings and other observations as they were taken, and of recording results or outcomes with dates and signatures in order to provide evidence for a possible future patent filing or other intellectual property protection. In this section you record measurements, produce tables and give a relevant sample calculation showing how you obtained the final results.

9. Graphs emphasizes the need to present an overall summary of the results in visual form. Data are presented in tables, whereas graphs show the relationships between the data and possible trends in a clear, easily read form.

10. Discussion of Results - This is one of the most important parts of the report: here you explain, analyse and interpret the results, leading to the conclusion. It shows the writer's understanding of the concepts behind the data. If the results differ from the Hypothesis or Objectives, explain why.

11. Conclusion - The purpose of the Conclusion is to discuss questions arising from the report and make suggestions for further work.

12. References - Here you list the sources of information obtained from textbooks, reference books, articles, investigations, etc., and indicate where these sources are cited in the text. These references are used as sources for background theory, previous findings on which the work is based, laboratory procedures, etc. The references are listed numerically at the end of the report to enable a reader to consult these works for further details.
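The headings above can be gathered into a simple report skeleton. The following LaTeX outline is only an illustrative sketch of one possible layout, not a prescribed format; the example title and author are hypothetical, and the section names simply follow the list above.

```latex
\documentclass[12pt]{article}
\usepackage{graphicx} % for diagrams and photographs

% Title: fewer than 10 words; date of performing the experiment
\title{Determination of the Density of an Unknown Metal}
\author{Student Name}
\date{31 March 2011}

\begin{document}
\maketitle

\section*{Aim}          % purpose of the work, clear and concise
\section*{Theory}       % background theory and brief literature review
\section*{Hypothesis}   % only where the work tests established theory
\section*{Apparatus and Materials}
\section*{Procedure}    % step-by-step; cite any ISO/standard procedure
\section*{Measurements and Results} % tables and a sample calculation
\section*{Graphs}       % visual summary of trends in the data
\section*{Discussion of Results}
\section*{Conclusion}   % open questions and suggested further work

\begin{thebibliography}{9}
\bibitem{text1} Reference to a textbook or article cited above.
\end{thebibliography}
\end{document}
```

Any heading that does not apply to a particular experiment (for example, Hypothesis in a purely descriptive investigation) can simply be omitted.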

Wacek Kijewski is the author of stimulating and entertaining resource material on experimental science: "SI Units, Conversion and Measurement Skills" (2011 edition, ISBN 0629340584, 190pp, USD193). The book is recommended for students and lecturers of science and engineering courses. Visit the website: http://www.wacek.co.za/. Read seven reviews: UNESCO, UK, South Africa, Botswana, United States, Hungary. His other ezines: "The Travellers Temperature Tips", "Is IQ a Metric Unit of Intelligence and...Stupidity", "Al-Gebra and Illuminati Links Discovered", "How to Measure Cultural Differences in Metric Units", "The Traveller's Temperature Predicaments (2)" and "How the Metric System was Introduced in Africa".

Article Source: http://EzineArticles.com/?expert=Wacek_Kijewski


Wednesday, 23 March 2011

Guidance On The End Of The World Prophecies For 2012

Many people have been talking about the way in which the world will come to an end in 2012. Everyone wants to know whether December 21, 2012 is the final day of the earth. The rumour has been spreading for a very long time, and the Mayan prediction is responsible for it. According to the rumour, the earth will experience dramatic changes in 2012. The change will be catastrophic for all of the living creatures on earth. It is widely assumed that Planet X will return to orbit in the solar system.

The return of Planet X would have a devastating effect on the solar system and its planets, including Neptune, Jupiter and Earth. Planet X will cross the orbit of Jupiter, and there is a possibility that Jupiter will turn into a tiny sun at that time. As Nibiru makes its passage between the earth and the sun, everyone will be well placed to see it. Because of this phenomenon, people will supposedly be able to see two suns in the sky in May 2011.

The solar flares that occur in an 11-year cycle will peak in 2012. The strong solar flares will affect the magnetic field of the earth and cause devastation. On the 21st of December 2012, the sun and earth will line up at the galactic equator. It is going to be the shortest day of the year, and this winter solstice alignment occurs only once every 25,800 years. These cosmic events are taken as signals that the world will come to an end. As a consequence of the solar flares, many disasters will happen and destroy the face of the entire earth. This is the conclusion of scientists, though there isn't any proof that it will happen. Scientists are merely speculating, based on the Mayan Long Count Calendar, which provides information about the astronomical system.

Scientists have estimated that about six billion people will die from catastrophic disasters. Many films have been made about the armageddon, including End Game by Alex Jones and End Clock: Nostradamus 2012. The film End Game provides insights into the way the world will come under one regime. Once the world government is in place, it will be simple to destroy eighty percent of the world's population. The elites will continue to live with the assistance of advanced technology. The movie End Clock: Nostradamus 2012 explains the prophecies of Nostradamus.

Nobody knows whether the world will end on December 21, 2012. Some scientists speculate that the world will end in November 2011. There is no definitive answer until the day arrives. Just as the Mayans claim, it may not be the time for the world to come to an end; instead, it may be a new beginning. Some say that the world will enter the Age of Aquarius. Rapid development of advanced technologies, revolutions, and the appearance of peculiar cultures such as hippies and rock music are said to be signs that the Age of Aquarius is near.

Nutraceutical Sciences - What Are They?

Understanding the technical jargon used by some industries can often be quite difficult for a lay person; however, this doesn't mean that the ideas are intrinsically hard to understand. The food supplement industry, alongside makers of other similar products, is collectively known in industry as the nutraceutical sciences, the etymology of which stems from two base words in the English language: nutrient and pharmaceutical.

Nutraceutical sciences are primarily concerned with what is often billed as a 'grey' area in international regulatory circles. This area of industry creates food supplements: anything that is taken as food to contribute to a person's diet, but which has been scientifically prepared to provide specific benefits based on laboratory research.

For example, cod liver oil has known benefits for persons who are lacking in Vitamin A and Vitamin D, whilst providing excellent levels of omega-3 fatty acids. Whilst these are all compounds that can be found in everyday foods, they are combined in one product (cod liver oil) that is concentrated and packaged in an easily digestible form by nutraceutical manufacturers. This means that the person taking the product receives a concentrated dose of the essential compounds that could help them with their condition.

Other examples of products that are made by the nutraceutical sciences can include body-building supplements, bath and personal care, sexual health and sports nutrition. There are also many other categories of products and for most people, daily contact and even use of products manufactured by the nutraceutical sciences industry is normal in the modern Western world.

The West's obsession with the products of the nutraceutical sciences goes some way towards explaining their success; however, their use in the West also depends on other forces such as marketing and public awareness of ailments, illnesses and their causes. This is never truer than today, when scientific knowledge helps guide the industry towards keeping people healthier and living longer.