Saturday, March 31, 2007

Innovative Technologies and the Consumer Industries

Manufacturers need to take into consideration the possible risks that any consumer product poses to human health. Those who make consumer products such as medicines, cosmetics and food ingredients need to be especially careful. This deserves real attention when the product is based on a new technology. Unless the impact of that technology on human health is studied properly, no product based on it should enter the market.

Advertisements for consumer products are not sufficient, even when they come from famous personalities and celebrities from sports, movies and so on. People are simply attracted to the product, and the blame starts later. Only proper test results can convince the public. Since people care about their health more than anything else, those who publish test results may win in the market. Companies often market a product by labelling it with the new technology it uses, and it is not clear to the consumer what that technology has actually been applied for in the product, or how dangerous the product is to human health. Many times the companies are more concerned with using the new technology and labelling the product with it than with first studying the technology's impact substantially.

If the test results about harm to human health are not clearly published to the public, there will be arguments and confusion among consumer groups. We have already seen some examples of this with Coca-Cola in India, and there are discussions and arguments on the Internet about nanotechnology-based consumer products.

Developing countries in particular must have sufficient and proper laws to check for any adverse effects on human health. Before breakthrough and innovative technologies are applied to consumer products, manufacturers should be required to research the long-term effects of those technologies on human health in advance. Regulations should be in force that insist on researching the impact of products based on new technologies, and on publishing the results with proof, before those technologies are applied to consumer products.

Read for details.

Sunday, March 25, 2007

Minimize the Client and Server transactions on the Internet

Whenever a website is visited, the web browser opens the website’s home page first. The home page serves as the introductory page for the entire website and provides links to other pages for detailed, specific information. These links can be tabs, menu buttons or hyperlinked text, and they are used to open the different pages. Usually the information is broken down into different pages so that each page serves a different purpose and presents classified information.

The web browser on the client machine processes the URL, opens the home page first and waits for the user to click any of the links on it. Pages are loaded from the web server on demand, whenever the user clicks a hyperlink. This saves Internet bandwidth by loading only the required pages.

Internet bandwidth always ends up being inadequate, even when bandwidth capacity is improved, because Internet traffic is always overloaded with text, photos, video and voice. So the bandwidth needs to be used carefully.

The current strategy used in web browsers is to load pages based on user requests, as discussed above. As the client machine keeps requesting pages, each page is fetched from the web server again and again. Distributed servers have increased the speed of getting a page, but even when the nearest server is chosen based on the client’s geographic location, the number of transactions is not reduced. Web browsers were originally designed to fetch only the required information, probably because client-side hardware such as RAM and secondary storage was costly.

Each time a page is refreshed or a hyperlink is clicked, many steps take place to get the page from the server. At a minimum, the following tasks must be completed to get a requested page (assuming the pages are not cached locally):

1. The web browser on the client side sends the request to the server.
2. The web server searches for the requested document.
3. The document found is put onto the Internet and sent to the client machine.
4. The client machine receives the document and refreshes the screen with the new page.
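
As a rough client-side illustration of these steps (a sketch using the libcurl library; the URL is a placeholder and the received document is simply discarded rather than rendered):

#include <stdio.h>
#include <curl/curl.h>

/* In a real browser step 4 would render the body; this sketch just drops it. */
static size_t on_body(void *data, size_t size, size_t nmemb, void *userp)
{
    (void)data; (void)userp;
    return size * nmemb;
}

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (curl == NULL)
        return 1;

    /* Step 1: the client-side browser sends the request to the server. */
    curl_easy_setopt(curl, CURLOPT_URL, "http://www.example.com/");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_body);

    /* Steps 2 and 3 happen on the server and the network:
       the server finds the document and sends it back. */
    CURLcode rc = curl_easy_perform(curl);

    /* Step 4: the client has received the document (or an error). */
    if (rc != CURLE_OK)
        fprintf(stderr, "fetch failed: %s\n", curl_easy_strerror(rc));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}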

This happens every time the page is refreshed. If two Internet Explorer windows are open on the same machine for the same user login, both act independently: when the same URL is opened in both, each initiates its own HTTP request for that page and fetches it from the server.

Can the web browser behave smartly when communicating with the web server?

If the transactions between the client and the server are minimized, the load on the Internet will definitely be reduced.

What are the strategies to minimize the transactions between the client and the server? It depends on the type of transaction. Transactions that require stepwise authentication at the server side cannot be minimized, but other types of transactions can and should be. Web browsers should determine which type of transaction is happening and act accordingly.

1. Whenever a web page contains descriptive material, the user will very likely click the hyperlinks embedded within the paragraphs. The web browser can fetch the pages for all the links in a paragraph when the page is downloaded for the first time. This is like getting all the material you need up front instead of fetching it whenever it is required. This approach makes sense when the server is far from the client and client-server communication is costly.
2. When the home page is downloaded from the server, the server can tell the client when the next update to that page can be expected, so the client can cache the page locally for that length of time. Frequently updated pages could follow some protocol to set their page-update time. Both ideas are sketched in the code after this list.
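
A minimal sketch of both strategies, building on the same libcurl idea; the helper names and the "next update" timestamp are invented for illustration (real servers do not send such a field, though HTTP cache headers play a similar role):

#include <stdio.h>
#include <time.h>
#include <curl/curl.h>

/* Hypothetical helper: fetch one URL so it is available locally.
   A real browser would store the body in its cache; libcurl writes
   it to stdout by default in this sketch. */
static int fetch_into_cache(CURL *curl, const char *url)
{
    curl_easy_setopt(curl, CURLOPT_URL, url);
    return curl_easy_perform(curl) == CURLE_OK;
}

/* Strategy 1: prefetch every link found in a downloaded paragraph,
   so later clicks are served locally instead of over the network. */
static void prefetch_links(CURL *curl, const char *links[], size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (!fetch_into_cache(curl, links[i]))
            fprintf(stderr, "prefetch failed: %s\n", links[i]);
}

/* Strategy 2: the server announces when the page will next change;
   the client refetches only after that time has passed. */
static int should_refetch(time_t next_update)
{
    return time(NULL) >= next_update;
}

int main(void)
{
    const char *links[] = { "http://www.example.com/a", "http://www.example.com/b" };

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (curl == NULL)
        return 1;

    prefetch_links(curl, links, 2);

    time_t next_update = time(NULL) + 3600;   /* server says: next change in an hour */
    printf("refetch now? %s\n", should_refetch(next_update) ? "yes" : "no");

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}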

Any comments please…

Program Structures and properties

1. C program Structure and properties

Structure:

int main(void)
{
    statement 1;
    statement 2;
    .
    .
    .
    statement n;
    return 0;   /* main returns an int status to the operating system */
}

A C program has the typical structure shown above. Every C program has at least a main function; other functions, if present, are called from main. C-style programming has the following features.

Properties:

Procedural Programming Technique -
C program logic is built sequentially. The overall program flow runs from start to end like a procedure. There may be multiple paths and exits in between, but the logic still flows from start to end as one process.
Modular Programming -
A C program is always a collection of function modules. The main function calls other functions; in fact, the entire main function can be a sequence of function calls in which the whole process is broken down into modules.
Top Down Approach -
The programming approach used in C is top down, meaning the program logic is built by making function calls on the assumption that those functions already exist. Once the higher-level logic is built, these functions are implemented. While building the higher-level logic, one need not bother about the lower-level details of the functions to be implemented; the programmer can simply assume these modules are available and use them. A minimal sketch of this style is given below.
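
Here is a minimal sketch that shows all three properties together; the function names are illustrative, not part of the original discussion:

#include <stdio.h>

/* Top-down style: main is written first as a sequence of calls to modules
   that are assumed to exist; their bodies are filled in afterwards. */
static int  read_input(void);
static int  process(int value);
static void report(int result);

int main(void)
{
    int value  = read_input();    /* module 1: gather input        */
    int result = process(value);  /* module 2: do the real work    */
    report(result);               /* module 3: present the result  */
    return 0;
}

static int  read_input(void)     { return 21; }           /* stub implementation */
static int  process(int value)   { return value * 2; }    /* stub implementation */
static void report(int result)   { printf("%d\n", result); }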

This section will be continued...
Readers, please leave your comments and any further expectations.

Friday, March 23, 2007

Compilers can be Intelligent

A compiler is a tool developers use to produce a binary image of the source code. The developer uses this tool frequently while the code is being developed. There are compilers for most languages, except for those where an interpreter executes the program.

Current compilers do preprocessing to clean up the code and handle the various compiler directives. The result is then subjected to lexical and syntax analysis, where typos and language-related errors are caught. After this come code generation and code optimization.

Current compilers have configuration options for a particular compilation setup. Every time the code is compiled, the same steps above are repeated; in some cases incremental compilation is used to act only on recently changed code. Even a small program is compiled a couple of times on average during development, yet in this process the compiler does not use profiler-generated information to act intelligently. A compiler could learn about a program and its data objects, and use that information to behave intelligently over that program. But today's compilers retain no information about a program that would make successive compilations of the same program easier.

Compilers need to become more like expert systems. They can acquire this expertise while compiling programs; since program development typically involves a couple of rounds of compilation anyway, the learning process need not be a separate phase.

In what areas can compilers behave as intelligent tools?

· The compiler can declare a local variable whose declaration is missing and indicate this to the user as a warning.

· The compiler can declare a global variable whose declaration is missing and indicate this to the user as a warning.

· The compiler can free dynamic memory allocated inside a function or block if the user forgets to free it, and indicate this to the user as a warning.

· The compiler can include the appropriate header file for a library that is used, instead of throwing errors.
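
As a hypothetical illustration (this is an invented example, not the behaviour of any existing compiler), the snippet below shows a small program in the corrected form such a compiler could arrive at; the comments mark the pieces it could have supplied on the programmer's behalf instead of stopping with errors:

#include <stdio.h>
#include <stdlib.h>   /* header the compiler could include automatically */

int main(void)
{
    int count = 3;            /* declaration the compiler could add, with a warning */
    char *buf = malloc(64);
    if (buf == NULL)
        return 1;

    snprintf(buf, 64, "count = %d", count);
    puts(buf);

    free(buf);                /* release the compiler could insert if forgotten */
    return 0;
}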


These abilities would speed up the program-development task and thereby reduce the program-development life-cycle time.

Booting technique for next generation computers

Booting is the process of bringing a system up for work. It checks the system components after power-on and loads the operating system from a secondary storage device. The process involves many predefined tasks, which are carried out every time the system is switched on. Booting starts with the POST step, followed by the bootstrapping process.

The bootstrapping process begins by executing software in ROM at a predefined address. This BIOS software contains a small routine that searches for devices eligible to participate in booting and loads a small program from a boot sector. This small program, called the bootstrap loader or boot loader, loads the operating system into memory and passes control to it. The boot sector can be on any storage device.

The booting process takes a while, depending on the machine configuration and the operating system. In a cold boot (booting from the shut-down state), the total booting time is divided into three stages:

· BIOS POST - the time for the power-on self test (POST)
· Pre-Logon - the time from BIOS POST handoff to the Windows Logon screen
· Post-Logon - the time from closing the Logon screen to a usable Start menu

These three stages make up the total booting time. Users are always concerned about how long the system takes to boot; everyone expects the system to be ready as soon as it is switched on.

In fact, we do not use the system’s full capacity and all its devices right away. Why check the entire RAM at the beginning? Why verify all the CPUs in the system initially? Why must all network cards be verified?

The booting process should be such that the logon screen is available to the user as soon as the system is switched on. A limited set of hardware can be made available initially, and the booting process can continue in the background to bring the full system resources online. Hardware and software resources thus become available as the user continues to work. A rough sketch of this idea is given below.
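
Real booting is firmware and kernel work rather than user-space C, but the deferred-initialization idea can be sketched with ordinary threads; every function name below is an invented placeholder:

#include <pthread.h>
#include <stdio.h>

/* Placeholder initialization routines standing in for real boot steps. */
static void init_essential_hw(void)  { puts("minimal RAM, one CPU and the display checked"); }
static void show_logon_screen(void)  { puts("logon screen available"); }
static void init_remaining_hw(void)  { puts("remaining RAM, extra CPUs and network cards verified"); }

/* The rest of the checks continue here while the user is already working. */
static void *background_boot(void *arg)
{
    (void)arg;
    init_remaining_hw();
    return NULL;
}

int main(void)
{
    pthread_t worker;

    init_essential_hw();     /* check only what the logon screen needs */
    show_logon_screen();     /* the system looks "up" almost immediately */
    pthread_create(&worker, NULL, background_boot, NULL);

    pthread_join(worker, NULL);   /* a real boot path would not block here */
    return 0;
}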

This lazy, demand-based booting technique for quick system availability could be investigated for next-generation computer systems.

When The Technologies Collide

In a competitive world there will always be arguments over selecting the optimal and efficient solution, and the community naturally selects the solution that makes its life better. The same is true for virtualization. Virtualization lets a single computer run multiple operating systems simultaneously, in compartments called virtual machines, which makes it possible to drive utilization of the computers toward 100%. In disaster-recovery solutions, virtual machines can also be moved from one server to another if a disaster strikes one location. PlateSpin technology enables disaster-recovery solutions in heterogeneous environments through virtualization.

When the user community assesses that VMware can provide cost-effective solutions for the IT industry, Microsoft, which holds the technology leadership, naturally raises its eyebrows. This is a scenario of collision between technologies that are, to some extent, not complementary. A win-win solution needs to be worked out in collaboration to move forward. What does the user community do in these scenarios? We know how the user community has accepted the Linux OS.

This technology impacts Microsoft's business. Why did Microsoft not promote it over the five to six years during which virtualization evolved? If one player does not promote a technology, somebody else takes the initiative; this always happens. One has to accept breakthroughs in technology and promote them quickly so that leadership can be maintained for generations.

As people say, "Virtualization is a journey, not a project." But how will this journey of VMware be welcomed by Microsoft? Will they go hand in hand to meet the world's technical requirements? If not, how will they arrive at a win-win solution?

Information Inheritance by Technology

History tells us about human civilizations in which different races of human beings lived on different parts of this planet. Any civilization in the past was tied to a particular region and was influenced by the climate and nature of that region. Its food habits, living style and all its social and economic interactions rarely went beyond that region. The communities of one civilization hardly knew that another contemporary civilization existed in some other part of the world. There was no communication across geographies, and no civilization influenced another.

For various reasons civilizations vanished, others came into existence, and the cycle repeated many times. We can recover only a few details about them. Civilizations left their footprints in many ways, such as details on walls and the coins they used for trading, but whatever details we get are mostly about the kingdoms: who ruled the region and something about the nobles who lived there.

Compare this with the civilization we live in now: there is one human civilization spread across the entire planet. The whole planet has shrunk into a village. People exchange ideas across the planet and influence the views and opinions of others they have often never met. We live together and think together for a better future, and we see unity across diverse fields.

What made this possible? Was it somebody's dream in the past? Or is it just the result of something that evolved over time?

The reason behind this is the Internet. It has made all of this possible and has brought people together. One writes for another to read, and one sings for another to listen, though the two have never met. We can call this an Internet civilization.

Information Inheritance

The Internet has not only brought people together; in the process it has generated an enormous amount of data. The data and information generated by billions of people over time need to be managed well. How will future generations make use of this data? It resides on storage disks spread across the planet, and it may be industry statistics, someone's thoughts, pictures, videos, movies and so on. Great effort has gone into generating this data, and it represents the current civilization. In what form should it be presented to the coming generations, and how will they benefit from this enormous amount of data?

Would it be better to sort it out, prepare an Internet library and store multiple copies of it so that everyone accesses the same material? Everyone is trying to shrink technology so that everything happens in a tiny model, be it software or hardware. So who will drive this?

Who is extending the life of Moore's law?

After a span of four decades, it appeared we were almost reaching the end of our ability to put more transistors onto a chip. Alternative approaches were found to increase the processing capacity of a chip, or to utilize its power fully through techniques such as hyper-threading, and there were many approaches to getting more computing ability by putting multiple processors together in one system. There were also good inventions in nanotechnology to increase transistor density.
But there was a feeling that Moore's law of increasing transistor density over time would fail, since there was no substantial invention of a new material that could make chip manufacturers' lives easier.
HP took a good initiative and made a good invention, one that may extend Moore's law further. Moore's law was a prediction made by Intel co-founder Gordon Moore in the 1960s: roughly every two years, the number of transistors on an integrated circuit doubles for the same amount of money. Since then that statement has held true, and processing power and speed have exploded over the years.
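Stated as a rough formula (a restatement for illustration, not Moore's own wording), a chip holding N_0 transistors today would hold about

N(t) = N_0 \cdot 2^{t/2}

transistors t years later, which is just another way of saying the count doubles every two years.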
But there is one natural, real limit built into Moore's law: if we somehow keep following it the way Moore put it, we will eventually be working at the level of individual atoms, and that may be the end of Moore's law too.

Click here for details.

Thin-Laptop over the Internet - Can it be possible?

The client-server architecture is used all over the Internet. Computers are found at every work point; they connect to a remote server and get data from it. These client machines serve as the interface for the user: they connect to the server, fetch the data and display it to the user in the required format. No data is stored at their end.

Networks are everywhere; it is hard to imagine an isolated machine without a network connection. Here comes the idea of a new thin laptop that makes use of the available network connection. The Thin-Laptop is a new kind of handy laptop capable of doing everything, the only difference being that it has no full-fledged motherboard of its own. It can load an editor and edit files, open integrated development environments for developing applications, compile and execute programs, browse the Internet, and play music and video, but all the computation happens at a remote station. The laptop has only the hardware necessary to connect to the remote machine. It can have I/O devices such as a mouse, CD-ROM drive, USB drive and floppy drive, but no hard disk, and all of these are handled locally by an embedded processor.

Though it looks similar to a dumb terminal connected to a mainframe, it is not the same, nor is it a full-fledged machine that does every job on its own. All compilation and execution happen on the remote computer; the Thin-Laptop receives the results and displays them to the user. To the user it feels like working on a local machine: it provides all access facilities locally while doing all the computation remotely, opening a remote terminal screen and letting the user do everything over the network.

General Features of the concept

1. It follows neither the dumb I/O terminal concept nor the client-server architecture alone; it is a hybrid of both.
2. No OS is installed on the machine; the embedded processor performs all the limited local tasks.
3. A network connection is mandatory to connect to the server; the machine does not work independently.
4. It can work both on battery power and on a direct power supply, like a laptop.

Advantages of this concept

1. It is a handy new product that can be carried to any place where a network is available.
2. Users working in computer centres, industries, banks and educational institutions can have their own Thin-Laptops on their desks, connected to the server. It is most useful wherever people work with databases on remote servers.
3. If these machines support wireless, users can use them anywhere in a wireless zone.
4. It is a more suitable model for developing countries, as it is economical to buy.

Can this technology be considered a form of cloud computing or not? Readers, please comment.

Smart Novel Editor - A feasible tool for embedded program development

Traditional compilers for programming languages start the compilation process with a preprocessing step, followed by lexical analysis, syntax analysis, intermediate code generation, code generation and code optimization. This is a translation process that takes the source code and produces machine code. The early steps of compilation are language oriented and the later steps are machine oriented; the former stages generate the warnings and errors related to the types and usage of variables.

A data-flow graph can help the programmer understand how data is used in the program. It clearly shows the flow of data through the various statements and the data dependencies between code fragments, which helps reduce logical errors. But this has to be done manually, as a separate step before coding starts; otherwise the programmer only learns about logical errors while testing the program.

Difficulties faced by the programmer today

One of the problems the programmer faces while coding is that he does not know how a variable was updated earlier. The programmer guesses the data values of the variables based on his previous logic and builds the program logic further, but he cannot picture the current values of the variables. He expects some value and builds the logic on it, and he makes mistakes when he uses variables without full knowledge of how they were updated previously.

What are the abilities of the currently available editors?

Many development environments have editors that bring up tool tips and pop-up windows for function signatures and class members. These guide the programmer in choosing the right class members and variables and help him build syntactically correct statements, but they reveal nothing about how the variables were changed previously. Information that helps the programmer build the logic correctly while typing the program would be advantageous: it reduces the program-development life-cycle time by cutting down logical errors up front.

What additional editor feature may help the programmer?

If the editor pops up a list of the most recent statements that update the variable at hand, the programmer understands that variable clearly. This is exactly the information the programmer needs to build the logic correctly; the editor thus helps him use variables appropriately while typing the program.

This editor also helps the programmer produce ‘First Time Right’ code, meaning the programmer corrects logical errors while typing the program itself.

Features of the novel editor

The editor supports a ‘DataUpdate’ feature. This intelligent feature is offered as a user option; if it is turned off, the editor behaves like a normal editor in the development environment. The intelligence is provided by a backend DLL that is programming-language dependent and syntax aware. It behaves like a preprocessor and compiler for the language being used, but it does not produce machine code. So there are two components: the editor, which is responsible for display and formatting and serves as the GUI, and the backend DLL.

The DLL performs the following functions online, as the programmer types the program:

a) Preprocessing
b) Lexical Analysis
c) Syntax Analysis
d) Providing the previous update details for any variable


The backend DLL keeps a database of statements and variables, showing how a particular variable was updated earlier. As the programmer types statements and uses a variable, it provides the list of statements that update that variable, which helps the programmer make valid decisions while framing the logic.

Whenever the programmer types a variable name, a menu pops up below the variable's position showing the recent statements that modify that variable. If there are many paths to the statement being typed, it shows the alternative lists of statements for each path. A rough sketch of the bookkeeping behind this pop-up is given below.
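
Here is a minimal sketch of the kind of bookkeeping the backend DLL might keep; all names and sizes are invented for illustration and are not part of any existing tool:

#include <stdio.h>
#include <string.h>

#define MAX_VARS     64
#define MAX_UPDATES  8

/* One tracked variable and the most recent statements that updated it. */
struct var_history {
    char name[32];
    int  lines[MAX_UPDATES];
    char stmts[MAX_UPDATES][80];
    int  count;
};

static struct var_history table[MAX_VARS];
static int var_count;

/* Called by the (hypothetical) syntax-analysis pass whenever an
   assignment to `name` is seen on line `line`. */
static void record_update(const char *name, int line, const char *stmt)
{
    struct var_history *v = NULL;
    for (int i = 0; i < var_count; i++)
        if (strcmp(table[i].name, name) == 0) { v = &table[i]; break; }
    if (v == NULL && var_count < MAX_VARS) {
        v = &table[var_count++];
        snprintf(v->name, sizeof v->name, "%s", name);
    }
    if (v == NULL || v->count >= MAX_UPDATES)
        return;                              /* sketch only: ignore overflow */
    v->lines[v->count] = line;
    snprintf(v->stmts[v->count], sizeof v->stmts[v->count], "%s", stmt);
    v->count++;
}

/* Called by the editor to fill the pop-up menu under the variable name. */
static void show_updates(const char *name)
{
    for (int i = 0; i < var_count; i++)
        if (strcmp(table[i].name, name) == 0)
            for (int j = 0; j < table[i].count; j++)
                printf("line %d: %s\n", table[i].lines[j], table[i].stmts[j]);
}

int main(void)
{
    record_update("total", 10, "total = 0;");
    record_update("total", 15, "total += price;");
    show_updates("total");   /* what the pop-up would display */
    return 0;
}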

Where will it be more useful?

Most software development for embedded processors happens with the help of cross compilers. This kind of intelligent editor can be used in any embedded software development environment where a C-like language is used. Reducing logical errors while the code is under development can shorten the software-development life cycle; otherwise the errors are discovered only when the code is tested on the simulator.

Write Your First Computer Program

How do you start writing programs? A small example following these steps is given after the list.

1. Identify what inputs are required.
2. Identify what results you need.
3. Identify the steps you do manually to get that result.
4. Write the steps in any language you know.
5. Translate the steps into the programming language you need.
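
As a small worked example of these steps (an invented illustration), suppose the result we need is the average of two numbers:

#include <stdio.h>

int main(void)
{
    /* Step 1: inputs required - two numbers. */
    double a, b;

    /* Step 2: result needed - their average. */
    double average;

    /* Steps 3 to 5: the manual steps (read, add, divide by two, show)
       translated into C. */
    printf("Enter two numbers: ");
    if (scanf("%lf %lf", &a, &b) != 2)
        return 1;

    average = (a + b) / 2.0;
    printf("Average = %f\n", average);
    return 0;
}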

Exploit the History

History is the main source of knowledge backed by proof. Because it talks about events that happened in the past, it gives us insight into the causes of events and their results, and those are the two things needed to make good decisions in today's life. In each and every area, history can be exploited to take good decisions and to make good progress in that area in the future, for it depicts the reality and significance of the various actions and reactions in that area.

Evolution of Microprocessors

What is a Microprocessor?

A simple, functional definition of a microprocessor is that it is a tiny fabricated silicon chip, a programmable device, able to execute a predefined set of instructions and to address a predefined range of memory, along with a few I/O devices such as a keyboard and an LCD display.

Evolution of Microprocessors

The evolution of the microprocessor started with the world's first microprocessor, Intel's 4004, released on 15 November 1971. It was a 4-bit microprocessor with a memory-addressing capacity of 4096 4-bit memory locations. Later the 8008 came to market, followed over a couple of decades by the 8080, 8085, 8086, 80186 and 80286. Then came the 32-bit processor, the 80386, whose architecture still dominates the computer world.
So we saw 8-bit processors initially, then 16-bit processors for a few years, and then the 32-bit processor architecture, which has been in use for more than a decade. The capacity of these processors to address memory, and their FLOPS (floating-point operations per second), were enough to handle the real-world computation requirements of the time.
But as people discovered the ease of using computers and their advantages in industrial automation, the demand for computing ability kept increasing. These microprocessors were found lacking both in processing ability and in their capacity to access larger memory spaces. The growing demand from applications for high computing ability and a bigger memory range put continuous pressure on chip manufacturers such as Intel and AMD to invent new technology. To meet the computing requirements of the real world, they released the next version of the microprocessor with a 64-bit architecture; Intel and AMD are the two giants providing this 64-bit microprocessor architecture to the computing world.
With the 64-bit architecture, both the memory range and the computing capacity increased significantly. These processors can address about 16 terabytes of memory (1 terabyte = 1024 GB) in typical implementations, unlike only 4 GB in the 32-bit case, and a full 64-bit address space would in principle span 16 exabytes. So each application gets far more memory than the 4 GB available to applications running on 32-bit microprocessors, and the wider 64-bit registers and data paths also raise the processing capacity.
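The address-range arithmetic behind these figures is straightforward (for example, the 16 TB figure corresponds to using 44 of the 64 address bits):

2^{32} \text{ bytes} = 4 \text{ GB}, \qquad 2^{44} \text{ bytes} = 16 \text{ TB}, \qquad 2^{64} \text{ bytes} = 16 \text{ EB}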

Can these processors keep growing the same way in the future? Does Moore's law answer this?

Save Water and Save Life
