
Sunday, August 29, 2010

cs case study

How does HCI (Human-Computer Interaction) relate to the following fields?

HCI relates to psychology through the study of the role of mental functions and social behavior in each person, while also exploring the underlying physiological and neurological processes.

HCI relates to computer science through the study of the theoretical foundations of information and computation, and of the practical techniques for their implementation and application in computer systems.

HCI relates to language and linguistics through the field concerned with the interactions between computers and human languages. By extension, the term also refers to the type of thought process that creates and uses language. Essential to both meanings is the systematic creation, maintenance, and use of systems of symbols, which dynamically reference concepts and are assembled according to structured patterns to form expressions and communicate meaning.

HCI relates to sociology as an interdisciplinary field focused on the interactions between users and computer systems, including the user interface and the underlying processes that produce the interactions. The emphasis is now on understanding the relationships among users' goals and objectives, their personal capabilities, the social environment, and the designed artifacts with which they interact. Human-computer interaction is also concerned with the development process used to create the interactive system and its value for the users.

HCI relates to ethnography, which is often employed for gathering empirical data on human societies and cultures. Data collection is often done through participant observation, interviews, questionnaires, etc. Ethnography aims to describe the nature of those who are studied through writing.


HCI relates to semiotics and branding in that the organization as a whole is looked upon as an informal information system (IS), where the values, beliefs and behaviour of individuals are important. The informal layer aggregates the formal layer, which is the way individual actions and business processes should be carried out according to the rules of the organisation; part of that formal system is, in turn, automated.

HCI relates to design in that it provides comprehensive, state-of-the-art coverage of the field and supplies principles and skills for designing any technology, through the use of many interesting, state-of-the-art examples.

HCI relates to engineering through a practical usability engineering process that can be incorporated into the software product development process to ensure the usability of interactive computer products. The basic elements of the usability engineering model are empirical user testing and prototyping, combined with iterative design.

HCI relates to ergonomics and human factors, which provide a theoretical perspective on human factors and ergonomics (HFE), defined as a unique and independent discipline that focuses on the nature of human-artefact interactions, viewed from the unified perspective of science, engineering, design, and technology. These interactions include a variety of natural and artificial products, processes and living environments.

Sunday, June 6, 2010

To the man i loved before who inspired me to wrote dz poem...."A broken Promise"

the first time i saw you,
i dont know why i hate you.
until time come i've got to know you,
and you've make me feel blue.

I remember the month of may,
when you said "i love you to me".
my heart jump with joy,
and couldnt find the words to say.

You've promise you'd never leave me,
'til i say "yes" to what you say.
you've prove your love to me,
and make me feel so lucky.

But unexpected test comes our way,
yet, i did my best for you to stay.
and now that your gone away,
i struggle to live my life each day...

thank god for having my family w/me,
and making me feel so loved & lucky.
surrounding me when i need someone to lean on,
wipe my tears and encourage me to move on...

"to my beloved nanay edith"..i wrote this for you..

Since you'd give birth to me
You'd never fail to let me see,
how precious i am to you
and letting me feel special & love by you.

i witness how you struggle to raise me,
& making me feel so loved & lucky.
you may not perfect on others eyes,
but for me,you'd been greater than perfect ..

i promise to treasure the love you shared,
with confident that you'd been w/us forever..
Guiding me as i faced this life ahead of me,
and keeping me safe all the way..

i thank god for the short time we shared,
together with pain,joy and laughters.
i may not show how much i love you,
i hope you feel it as i grow up w/you.

may the lord prepare your place w/him,
& find happiness w/the angels above & have fun w/ them.
may you find rest w/him peacefully,
and whisper us about gods beautiful story...
i missed you nanay....i love you soooo.... muchhhhhhd...

Saturday, May 22, 2010

show all running progs...

Process.vbs
' Free Sample VBScript to discover which processes are running
' Author Guy Thomas http://computerperformance.co.uk/
' Version 1.4 - December 2005
' -------------------------------------------------------'
Option Explicit
Dim objWMIService, objProcess, colProcess
Dim strComputer, strList

strComputer = "."

Set objWMIService = GetObject("winmgmts:" _
& "{impersonationLevel=impersonate}!\\" _
& strComputer & "\root\cimv2")

Set colProcess = objWMIService.ExecQuery _
("Select * from Win32_Process")

For Each objProcess in colProcess
strList = strList & vbCr & _
objProcess.Name
Next

WSCript.Echo strList
WScript.Quit

' End of List Process Example VBScript

detect if a program is running ....

' Ask for an executable name and check whether that process is running
Set WshShell = WScript.CreateObject("WScript.Shell")
Set colProcessList = GetObject("Winmgmts:").ExecQuery("Select * from Win32_Process")
'==============================================================================================
' Note: Win32_Process.Name is just the executable name (e.g. "notepad.exe"), not a full path
file_name = InputBox("What file to check for?", "File name?", "c:\boot.ini") & ".exe"

vFound = False                       ' assume the process is not running until we find it
For Each objProcess In colProcessList
    If LCase(objProcess.Name) = LCase(file_name) Then
        vFound = True
    End If
Next

If vFound = True Then
    MsgBox "Found"
Else
    MsgBox "Not Found"
End If

execute a program

sub shell(cmd)
' Run a command as if you were running from the command line
dim objShell
Set objShell = WScript.CreateObject( "WScript.Shell" )
objShell.Run(cmd)
Set objShell = Nothing
end sub

file_name = inputbox("What file to execute?", "File name?", "c:\boot.ini")

shell file_name


Friday, May 21, 2010

REPORT

7.4 Magnetic Disk Technology
Before the advent of disk drive technology, sequential media such as punched cards and magnetic or paper tape were the only kinds of durable storage available. If the data that someone needed were written at the trailing end of a tape reel, the entire volume had to be read, one record at a time. Sluggish readers and small system memories made this an excruciatingly slow process. Tape and cards were not only slow, but they also degraded rather quickly due to the physical and environmental stresses to which they were exposed. Paper tape often stretched and broke. Open reel magnetic tape not only stretched, but also was subject to mishandling by operators. Cards could tear, get lost, and warp.
In this technological context, it is easy to see how IBM fundamentally changed the computer world in 1956 when it deployed the first commercial disk-based computer called the Random Access Method of Accounting and Control computer, or RAMAC, for short. By today's standards, the disk in this early machine was incomprehensibly huge and slow. Each disk platter was 24 inches in diameter, containing only 50,000 7-bit characters of data on each surface. Fifty two-sided platters were mounted on a spindle that was housed in a flashy glass enclosure about the size of a small garden shed. The total storage capacity per spindle was a mere 5 million characters and it took one full second, on average, to access data on the disk. The drive weighed more than a ton and cost millions of dollars to lease. (One could not buy equipment from IBM in those days.)
By contrast, in early 2000, IBM began marketing a high-capacity disk drive for use in palmtop computers and digital cameras. These disks are 1 inch in diameter, hold 1 gigabyte (GB) of data, and provide an average access time of 15 milliseconds. The drive weighs less than an ounce and retails for less than $300!
Disk drives are called random (sometimes direct) access devices because each unit of storage, the sector, has a unique address that can be accessed independently of the sectors around it. As shown in Figure 7.9, sectors are divisions of concentric circles called tracks. On most systems, every track contains exactly the same number of sectors. Each sector contains the same number of bytes. Hence, the data is written more "densely" at the center of the disk than at the outer edge. Some manufacturers pack more bytes onto their disks by making all sectors approximately the same size, placing more sectors on the outer tracks than on the inner tracks. This is called zoned-bit recording. Zoned-bit recording is rarely used because it requires more sophisticated drive control electronics than traditional systems.

Figure 7.9: Disk Sectors Showing Intersector Gaps and Logical Sector Format
Disk tracks are consecutively numbered starting with track 0 at the outermost edge of the disk. Sectors, however, may not be in consecutive order around the perimeter of a track. They sometimes "skip around" to allow time for the drive circuitry to process the contents of a sector prior to reading the next sector. This is called interleaving. Interleaving varies according to the speed of rotation of the disk as well as the speed of the disk circuitry and its buffers. Most of today's fixed disk drives read disks a track at a time, not a sector at a time, so interleaving is now becoming less common.
7.4.1 Rigid Disk Drives
Rigid ("hard" or fixed) disks contain control circuitry and one or more metal or glass disks called platters to which a thin film of magnetizable material is bonded. Disk platters are stacked on a spindle, which is turned by a motor located within the drive housing. Disks can rotate as fast as 15,000 revolutions per minute (rpm), the most common speeds being 5400 rpm and 7200 rpm. Read/write heads are typically mounted on a rotating actuator arm that is positioned in its proper place by magnetic fields induced in coils surrounding the axis of the actuator arm (see Figure 7.10). When the actuator is energized, the entire comb of read-write heads moves toward or away from the center of the disk.

Figure 7.10: Rigid Disk Actuator (with Read/Write Heads) and Disk Platters
Despite continual improvements in magnetic disk technology, it is still impossible to mass-produce a completely error-free medium. Although the probability of error is small, errors must, nevertheless, be expected. Two mechanisms are used to reduce errors on the surface of the disk: special coding of the data itself and error-correcting algorithms. (This special coding and some error-correcting codes were discussed in Chapter 2.) These tasks are handled by circuits built into the disk controller hardware. Other circuits in the disk controller take care of head positioning and disk timing.
In a stack of disk platters, all of the tracks directly above and below each other form a cylinder. A comb of read-write heads accesses one cylinder at a time. Cylinders describe circular areas on each disk.
Typically, there is one read-write head per usable surface of the disk. (Older disks, particularly removable disks, did not use the top surface of the top platter or the bottom surface of the bottom platter.) Fixed disk heads never touch the surface of the disk. Instead, they float above the disk surface on a cushion of air only a few microns thick. When the disk is powered down, the heads retreat to a safe place. This is called parking the heads. If a read-write head were to touch the surface of the disk, the disk would become unusable. This condition is known as a head crash.
Head crashes were common during the early years of disk storage. First-generation disk drive mechanical and electronic components were costly with respect to the price of disk platters. To provide the most storage for the least money, computer manufacturers made disk drives with removable disks called disk packs. When the drive housing was opened, airborne impurities, such as dust and water vapor, would enter the drive housing. Consequently, large head-to-disk clearances were required to prevent these impurities from causing head crashes. (Despite these large head-to-disk clearances, frequent crashes persisted, with some companies experiencing as much downtime as uptime.) The price paid for the large head-to-disk clearance was substantially lower data density. The greater the distance between the head and the disk, the stronger the charge in the flux coating of the disk must be for the data to be readable. Stronger magnetic charges require more particles to participate in a flux transition, resulting in lower data density for the drive.
Eventually, cost reductions in controller circuitry and mechanical components permitted widespread use of sealed disk units. IBM invented this technology, which was developed under the code name "Winchester." Winchester soon became a generic term for any sealed disk unit. Today, with removable-pack drives no longer being manufactured, we have little need to make the distinction. Sealed drives permit closer head-to-disk clearances, increased data densities, and faster rotational speeds. These factors constitute the performance characteristics of a rigid disk drive.
Seek time is the time it takes for a disk arm to position itself over the required track. Seek time does not include the time that it takes for the head to read the disk directory. The disk directory maps logical file information, for example, my_story.doc, to a physical sector address, such as cylinder 7, surface 3, sector 72. Some high-performance disk drives practically eliminate seek time by providing a read/write head for each track of each usable surface of the disk. With no movable arms in the system, the only delays in accessing data are caused by rotational delay.
Rotational delay is the time that it takes for the required sector to position itself under a read/write head. The sum of the rotational delay and seek time is known as the access time. If we add to the access time the time that it takes to actually read the data from the disk, we get a quantity known as transfer time, which, of course, varies depending on how much data is read. Latency is a direct function of rotational speed. It is a measure of the amount of time it takes for the desired sector to move beneath the read/write head after the disk arm has positioned itself over the desired track. Usually cited as an average, it is calculated as:

average rotational latency = 1/2 x (60 seconds / rotational speed in rpm)

that is, half the time of one full revolution of the disk.
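
As a quick illustration, here is a small VBScript sketch (in the same style as the scripts in the May 22 post) that plugs assumed sample numbers, a 7200 rpm spindle and an 8 ms average seek time, into the formula above; these values are only examples, not the actual Figure 7.11 specification.

' Latency.vbs - rough sketch of the latency/access-time arithmetic (assumed sample values)
Option Explicit
Dim rpm, seekMs, latencyMs, accessMs

rpm = 7200       ' assumed rotational speed, revolutions per minute
seekMs = 8       ' assumed average seek time, milliseconds

' average latency = half of one full revolution = 1/2 * (60 / rpm) seconds
latencyMs = 0.5 * (60 / rpm) * 1000
accessMs = seekMs + latencyMs        ' access time = seek time + rotational delay

WScript.Echo "Average rotational latency: " & Round(latencyMs, 2) & " ms"
WScript.Echo "Average access time: " & Round(accessMs, 2) & " ms"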

To help you appreciate how all of this terminology fits together, we have provided a typical disk specification as Figure 7.11.

Figure 7.11: A Typical Rigid Disk Specification as Provided by Disk Drive Manufacturers
Because the disk directory must be read prior to every data read or write operation, the location of the directory can have a significant impact on the overall performance of the disk drive. Outermost tracks have the lowest bit density per areal measure, hence, they are less prone to bit errors than the innermost tracks. To ensure the best reliability, disk directories can be placed at the outermost track, track 0. This means for every access, the arm has to swing out to track 0 and then back to the required data track. Performance therefore suffers from the wide arc made by the access arms.
Improvements in recording technology and error-correction algorithms permit the directory to be placed in the location that gives the best performance: at the middlemost track. This substantially reduces arm movement, giving the best possible throughput. Some, but not all, modern systems take advantage of center track directory placement.
Directory placement is one of the elements of the logical organization of a disk. A disk's logical organization is a function of the operating system that uses it. A major component of this logical organization is the way in which sectors are mapped. Fixed disks contain so many sectors that keeping tabs on each one is infeasible. Consider the disk described in our data sheet. Each track contains 132 sectors. There are 3196 tracks per surface and 5 surfaces on the disk. This means that there are a total of 2,109,360 sectors on the disk. An allocation table listing the status of each sector (the status being recorded in 1 byte) would therefore consume over 2 megabytes of disk space. Not only is this a lot of disk space spent for overhead, but reading this data structure would consume an inordinate amount of time whenever we need to check the status of a sector. (This is a frequently executed task.) For this reason, operating systems address sectors in groups, called blocks or clusters, to make file management simpler. The number of sectors per block determines the size of the allocation table. The smaller the size of the allocation block, the less wasted space there is when a file doesn't fill the entire block; however, smaller block sizes make the allocation tables larger and slower. We will look deeper into the relationship between directories and file allocation structures in our discussion of floppy disks in the next section.
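
To make the allocation-table arithmetic above concrete, here is a small VBScript sketch that recomputes the 2,109,360-sector total from the data-sheet figures and shows how grouping sectors into clusters shrinks the table (assuming, as in the text, one status byte per allocation unit; the cluster sizes tried are just examples).

' AllocTable.vbs - sketch of the allocation-table sizing discussed above
Option Explicit
Dim sectorsPerTrack, tracksPerSurface, surfaces, totalSectors
Dim sectorsPerCluster, tableBytes

sectorsPerTrack = 132
tracksPerSurface = 3196
surfaces = 5
' CLng forces long-integer arithmetic so the product does not overflow
totalSectors = CLng(sectorsPerTrack) * tracksPerSurface * surfaces   ' 2,109,360 sectors

' one status byte per allocation unit: smaller clusters mean a bigger table
For Each sectorsPerCluster In Array(1, 4, 8, 16)
    tableBytes = totalSectors / sectorsPerCluster
    WScript.Echo sectorsPerCluster & " sector(s) per cluster -> " & _
        tableBytes & " bytes of allocation table"
Next
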
One final comment about the disk specification shown in Figure 7.11: You can see that it also includes estimates of disk reliability under the heading of "Reliability and Maintenance." According to the manufacturer, this particular disk drive is designed to operate for five years and tolerate being stopped and started 50,000 times. Under the same heading, a mean time to failure (MTTF) figure is given as 300,000 hours. Surely this figure cannot be taken to mean that the expected value of the disk life is 300,000 hours; this is just over 34 years if the disk runs continuously. The specification states that the drive is designed to last only five years. This apparent anomaly owes its existence to statistical quality control methods commonly used in the manufacturing industry. Unless the disk is manufactured under a government contract, the exact method used for calculating the MTTF is at the discretion of the manufacturer. Usually the process involves taking random samples from production lines and running the disks under less than ideal conditions for a certain number of hours, typically more than 100. The number of failures is then plotted against probability curves to obtain the resulting MTTF figure. In short, the "Design Life" number is much more credible and understandable.
7.4.2 Flexible (Floppy) Disks
Flexible disks are organized in much the same way as hard disks, with addressable tracks and sectors. They are often called floppy disks because the magnetic coating of the disk resides on a flexible Mylar substrate. The data densities and rotational speeds (300 or 360 RPM) of floppy disks are limited by the fact that floppies cannot be sealed in the same manner as rigid disks. Furthermore, floppy disk read/write heads must touch the magnetic surface of the disk. Friction from the read/write heads causes abrasion of the magnetic coating, with some particles adhering to the read/write heads. Periodically, the heads must be cleaned to remove the particles resulting from this abrasion.
If you have ever closely examined a 3.5" diskette, you have seen the rectangular hole in the metal hub at the center of the diskette. The electromechanics of the disk drive use this hole to determine the location of the first sector, which is on the outermost edge of the disk.
Floppy disks are more uniform than fixed disks in their organization and operation. Consider, for example, the 3.5" 1.44MB DOS/Windows diskette. Each sector of the floppy contains 512 data bytes. There are 18 sectors per track, and 80 tracks per side. Sector 0 is the boot sector of the disk. If the disk is bootable, this sector contains information that enables the system to start from the floppy disk instead of its fixed disk.
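
A little arithmetic confirms where the "1.44MB" figure comes from; the sketch below (same geometry as above) is just that check in VBScript.

' FloppyCapacity.vbs - quick check of the 1.44MB diskette geometry above
Option Explicit
Dim bytesPerSector, sectorsPerTrack, tracksPerSide, sides, totalBytes

bytesPerSector = 512
sectorsPerTrack = 18
tracksPerSide = 80
sides = 2

' CLng forces long-integer arithmetic so the product does not overflow
totalBytes = CLng(bytesPerSector) * sectorsPerTrack * tracksPerSide * sides
WScript.Echo totalBytes & " bytes = " & (totalBytes / 1024) & " KB"   ' 1,474,560 bytes = 1440 KB
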
Immediately following the boot sector are two identical copies of the file allocation table (FAT). On standard 1.44MB disks, each FAT is nine sectors long. On 1.44MB floppies, a cluster (the addressable unit) consists of one sector, so there is one entry in the FAT for each data sector on the disk.
The disk root directory occupies 14 sectors starting at sector 19. Each root directory entry occupies 32 bytes, within which it stores a file name, the file attributes (archive, hidden, system, and so on), the file's timestamp, the file size, and its starting cluster (sector) number. The starting cluster number points to an entry in the FAT that allows us to follow the chain of sectors spanned by the data file if it occupies more than one cluster.
A FAT is a simple table structure that keeps track of each cluster on the disk with bit patterns indicating whether the cluster is free, reserved, occupied by data, or bad. Because a 1.44MB disk contains 18 x 80 x 2 = 2880 sectors, each FAT entry needs 12 bits just to point to a cluster. In fact, each FAT entry on a floppy disk is 12 bits wide, so the organization is known as FAT12. If a disk file spans more than one cluster, the FAT entry for each of the file's clusters contains a pointer to the next FAT entry for the file. If the cluster is the last one of the file, its "next FAT entry" pointer contains an end-of-file marker. FAT's linked-list organization permits files to be stored on any set of free sectors, regardless of whether they are contiguous.
To make this idea clearer, consider the FAT entries given in Figure 7.12. As stated above, the FAT contains one entry for each cluster on the disk. Let's say that our file occupies four sectors starting with sector 121. When we read this file, the following happens:

Figure 7.12: A File Allocation Table
1. The disk directory is read to find the starting cluster (121). The first cluster is read to retrieve the first part of the file.
2. To find the rest of the file, the FAT entry in location 121 is read, giving the next data cluster of the file and FAT entry (124).
3. Cluster 124 and the FAT entry for cluster 124 are read. The FAT entry points to the next data at sector 126.
4. The data sector 126 and FAT entry 126 are read. The FAT entry points to the next data at sector 122.
5. The data sector 122 and FAT entry 122 are read. Upon seeing the end-of-file marker in place of a next-sector pointer, the system knows it has obtained the last sector of the file.
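
The five steps above are easy to mimic in code. The VBScript sketch below fakes a FAT with a Scripting.Dictionary holding only the four entries from the walkthrough (121 -> 124 -> 126 -> 122 -> end of file) and follows the chain; the -1 end marker and the cluster numbers are taken from the example, not from a real disk.

' FatChain.vbs - sketch of following the FAT cluster chain from Figure 7.12
Option Explicit
Const EOF_MARK = -1                  ' stand-in for the end-of-file marker
Dim fat, cluster, chain

Set fat = CreateObject("Scripting.Dictionary")
fat.Add 121, 124                     ' FAT entry 121 points to the next cluster, 124
fat.Add 124, 126
fat.Add 126, 122
fat.Add 122, EOF_MARK                ' cluster 122 is the last cluster of the file

cluster = 121                        ' starting cluster, taken from the directory entry
chain = ""
Do While cluster <> EOF_MARK
    chain = chain & cluster & " "    ' "read" the data cluster
    cluster = fat(cluster)           ' then follow its FAT entry to the next cluster
Loop

WScript.Echo "Clusters read: " & chain       ' Clusters read: 121 124 126 122
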
It doesn't take much thought to see the opportunities for performance improvement in the organization of FAT disks. This is why FAT is not used on high-performance, large-scale systems. FAT is still very useful for floppy disks for two reasons. First, performance isn't a big concern for floppies. Second, floppies have standard capacities, unlike fixed disks for which capacity increases are practically a daily event. Thus, the simple FAT structures aren't likely to cause the kinds of problems encountered with FAT16 as disk capacities started commonly exceeding 32 megabytes. Using 16-bit cluster pointers, a 33MB disk must have a cluster size of at least 1KB. As the drive capacity increases, FAT16 sectors get larger, wasting a large amount of disk space when small files do not occupy full clusters. Drives over 2GB require cluster sizes of 64KB!
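
The cluster-size squeeze described above can also be checked with a few lines of VBScript; the disk capacities below are arbitrary examples, but the 33MB and over-2GB cases reproduce the 1KB and 64KB figures quoted in the text.

' Fat16Clusters.vbs - sketch of the FAT16 cluster-size arithmetic above
' (16-bit cluster numbers allow at most 65,536 clusters per volume)
Option Explicit
Dim maxClusters, diskMB, clusterKB

maxClusters = 65536
For Each diskMB In Array(33, 512, 2048, 2500)
    clusterKB = 0.5                  ' start at one 512-byte sector
    ' double the cluster size until 65,536 clusters cover the whole disk
    Do While clusterKB * maxClusters < diskMB * 1024.0
        clusterKB = clusterKB * 2
    Loop
    WScript.Echo diskMB & " MB disk -> " & clusterKB & " KB clusters"
Next
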
Various vendors use a number of proprietary schemes to pack higher data densities onto floppy disks. The most popular among these technologies are Zip drives, pioneered by the Iomega Corporation, and several magneto-optical designs that combine the rewritable properties of magnetic storage with the precise read/write head positioning afforded by laser technology. For purposes of high-volume long-term data storage, however, floppy disks are quickly becoming outmoded by the arrival of inexpensive optical storage methods.
COMLAB.ACT.

1. What are the significant parts discussed in the video?
Ans.
As design advances reduced the costs of logic and memory, the programmer's time became more important. Subsequent computer designs emphasized ease of programming, typically using a larger and more intuitive instruction set. Eventually, most machine-language programming came to be generated by compilers and report generators. The reduced instruction set computer (RISC) returned full circle to a simple instruction set that achieves multiple actions in a single instruction cycle in order to maximize execution speed, though the newer computers had much longer instruction words.

2. Evaluate at least one topic discussed in the video.
Ans.
- It shares the course objectives, which are: how a computer works, what the basic principles are, how to analyze computer performance, and how computers are designed and built.
- It also talks about abstraction in computing and gives a brief explanation: abstraction is the process or result of generalization by reducing the information content of a concept or an observable phenomenon, typically to retain only the information that is relevant for a particular purpose.

3. How important is computer architecture in our daily living?
Ans.
It promotes awareness of energy use in everyday life. Extending this approach to a larger architectural and urban scale explores the possibilities of design as an intervention into multiple technical, material, and social systems, or ecologies, in addition to designing materials, objects, and interfaces. The design of interventions into energy ecologies and the use of design methods become a platform for exposing existing habits and hidden norms, as well as for proposing alternative actions and views.
4. Give a commentary/reaction to the video.
Ans.
It really gives us more lessons about computer architecture, which the instructor explains step by step.

Thursday, May 20, 2010

John 14:1-4;

"Do not let your hearts be troubled. Trust in God; trust also in me. In my Father's house are many rooms; if it were not so, I would have told you. I am going there to prepare a place for you. And if I go and prepare a place for you, I will come back and take you to be with me that you also may be where I am. You know the way to the place where I am going."
ASSESSMENT:


Anyways...about the report of group 2...I've learned just a few lessons, because most of them did not give a broad explanation of the topic each of them prepared, aside from Trina, Bryan, and Delalamon, who explained their reports well...the rest I found hard to understand, and their voices sound like they are in their bedroom, which makes me feel sooo...sleepy...anyway, maybe we're the same, guys; we did not study and prepare our report well....."i'm sorry guyz...hehe, i'm just telling the truth...peace on earth..")

THATS ALL I THANK U.....

Saturday, April 17, 2010

case study1

CASE STUDY #1:
PAST AND PRESENT TRENDS OF COMPUTER ARCHITECTURE
Past:
The computer started with large, heavy machines composed of thousands of vacuum tubes. The development of the transistor created the next evolution in computer architecture, the microchip, which is used in the current generation of computers. Like its vacuum tube predecessor, this architecture of utilizing transistors can only go so far. At this rate each switch will eventually become the size of an atom. When this happens the laws of quantum mechanics must be used. A new evolution in computer architecture will need to be developed to handle the unique laws of quantum mechanics. This architecture is already being developed and is called a quantum computer.
Quantum computers work in a rather distinctive way. Instead of using traditional bits, they use quantum bits, or qubits. Qubits are particles that can take on the unique states required for quantum computing. The best way to understand how a quantum computer works is by example. A basic example is to take a register composed of 2 bits. Using a classic register, these two bits can have a value of 0, 1, 2, or 3. Now using a quantum register with two qubits, the register can hold 0, 1, 2, and 3 at the same time, in superposition.
Remarkable developments in semiconductor technology have enabled the implementation of ideas that were previously beyond the computer architect's grasp. Ideas like multilevel memory hierarchies, pipelining, multiple instruction issue, and speculative execution are just a few examples of architectural innovations that have become commonplace in high-end computers.
The computer "revolution" has been driven by the remarkable growth in semiconductor technologies. The common denominator has been the constant reduction in the size of electronic and magnetic devices that can be manufactured inexpensively. If we take our time horizon to be the next decade as an appropriate strategic point in the future, we expect to see feature sizes of 0.05 microns and chips with 100 million devices operating at several gigahertz. Technical obstacles need to be overcome for this progress to occur, but it is our expectation that the overwhelming resources in industry and academia that are deployed in the computer industry.
The two major obstacles that we see are the issues of power and memory latency. Low-power design is important because portable computing will become ubiquitous in the next five years. Even in the realm of high-performance computing, power issues are important because power consumption is linearly related to clock frequency.