NOTICE: This manual contains solutions to the review questions and homework problems in Operating Systems: Internals and Design Principles, Sixth Edition, by William Stallings. If you spot an error in a solution, please report it.

UNIX: All of the UNIX material from the book in one PDF document.

The Memory Management Reference: A good source of documents and links on all aspects of memory management. Operating System Technical Comparison: Includes a substantial amount of information on a variety of operating systems.

Useful tools and tutorials. Operated by the U. Errata sheet: Latest list of errors, updated at most monthly. File name is Errata-OS4e. If you spot any errors, please report them. Pseudocode: All of the algorithms from the book in an easy-to-read, Pascal-like pseudocode.

Student Resource Site: Help and advice for the long-suffering, overworked student.


Windows: All of the Windows material from the book in one PDF document, for easy reference. Java: All of the algorithms from the book in Java.

Java Primer: A brief Java primer by Paul Carter of the U. PowerPoint Slides: The "official" set of slides commissioned for use specifically with this book. PDF Slides: Yet another set of slides prepared for use with this book by Dr. Akbar Hussain of Aalborg University. Course Notes: A good set of summary notes suitable for use by students as a study guide, developed by Sanjiv K. Bhatia of the University of Missouri-St. Louis.

Both processes reach line 3 at the same time.

Now, we'll assume both read number[0] and number[1] before either addition takes place. Let p1 complete line 3, assigning 1 to number[1], but let p0 block before its assignment.

Then p1 gets through the while loop at line 5 and enters the critical section. While in the critical section, it blocks; p0 unblocks and assigns 1 to number[0] at line 3. It proceeds to the while loop at line 5. The second condition on line 5 is false, so p0 enters the critical section. Now p0 and p1 are both in the critical section, violating mutual exclusion.

Note that if the loop is entered and then process j reaches line 3, one of two situations arises.

Either number[j] has the value 0 when the first test is executed, in which case process i moves on to the next process, or number[j] has a nonzero value, in which case at some point number[j] will be greater than number[i], since process i finished executing statement 3 before process j began. Either way, process i will enter the critical section before process j, and when process j reaches the while loop, it will loop at least until process i leaves the critical section.
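For concreteness, here is a sketch of the kind of two-process entry protocol the trace above describes (a bakery-style algorithm without a choosing flag; the variable names follow the text, and the line numbering is assumed to match):

/* Simplified two-process ticket protocol (i is 0 or 1, j = 1 - i).
   Without a "choosing" flag, the race traced above is possible. */
int number[2] = {0, 0};

void enter(int i) {
    int j = 1 - i;
    number[i] = number[j] + 1;                  /* line 3: take a ticket */
    while (number[j] != 0 &&                    /* line 5: defer to a    */
           (number[j] < number[i] ||            /* smaller ticket, with  */
            (number[j] == number[i] && j < i))) /* ids breaking ties     */
        ;                                       /* busy wait */
}

void leave(int i) {
    number[i] = 0;
}

If both processes read each other's number before either writes its own, both take ticket 1, and p1 can pass its test while number[0] is still 0, exactly as traced above.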

(This solution is due to Matt Bishop, UC Davis.) The unique feature of this algorithm is that a process need wait no more than N - 1 turns for access. The values of control[i] indicate whether process i is idle, wants in, or is claiming the critical section, and the value of k reflects whose turn it is to enter the critical section. Entry and exit algorithms are sketched below. On exit, if process i finds a process with a nonzero control entry, then k is set to the identifier of that process.
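A sketch of the entry and exit algorithms, following the published Eisenberg-McGuire protocol (the names IDLE, WANT_IN, and IN_CS for the three control values are assumptions):

#define N 8
enum state { IDLE, WANT_IN, IN_CS };
volatile enum state control[N];
volatile int k;                     /* whose turn it is */

void enter(int i) {
    int j;
    do {
        control[i] = WANT_IN;
        j = k;
        while (j != i)              /* wait until all from k to i are idle */
            j = (control[j] != IDLE) ? k : (j + 1) % N;
        control[i] = IN_CS;
        for (j = 0; j < N; j++)     /* make sure no one else claims the CS */
            if (j != i && control[j] == IN_CS) break;
    } while (j < N || (k != i && control[k] != IDLE));
    k = i;                          /* claim the turn */
}

void leave(int i) {
    int j = (k + 1) % N;
    while (control[j] == IDLE)      /* find first non-idle process after k */
        j = (j + 1) % N;
    k = j;                          /* designate it as the unique successor */
    control[i] = IDLE;
}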

The original paper makes the following observations: First, observe that no two processes can be simultaneously processing between their statements L3 and L6. Finally, observe that no single process can be blocked. Before any process having executed its critical section can exit the area protected from simultaneous processing, it must designate as its unique successor the first contending process in the cyclic ordering, assuring the choice of any individual contending process within N - 1 turns.

Original paper: Eisenberg, M. A., and McGuire, M. R. "Further Comments on Dijkstra's Concurrent Programming Control Problem." Communications of the ACM, November 1972.

An atomic statement such as exchange will use more resources.

Any process waiting to enter its critical section will thus do so within n - 1 turns. In the definition of Figure 5. that information is readily available; with the definition of this problem, you don't have that information readily available.

However, the two versions function the same. We quote the explanation in Reek's paper. There are two problems. First, because unblocked processes must reenter the mutual exclusion (line 10), there is a chance that newly arriving processes (at line 5) will beat them into the critical section.

Second, there is a time delay between when the waiting processes are unblocked and when they resume execution and update the counters. The waiting processes must be accounted for as soon as they are unblocked (because they might resume execution at any time), but it may be some time before the processes actually do resume and update the counters to reflect this. To illustrate, consider the case where three processes are blocked at line 9. The last active process will unblock them as it departs.

But there is no way to predict when these processes will resume executing and update the counters to reflect the fact that they have become active.

If a new process reaches line 6 before the unblocked ones resume, the new one should be blocked. But the status variables have not yet been updated, so the new process will gain access to the resource. When the unblocked ones eventually resume execution, they will also begin accessing the resource.

The solution has failed because it has allowed four processes to access the resource together. One fix forces unblocked processes to recheck whether they can begin using the resource.

A better approach is to eliminate the time delay. If the departing process updates the waiting and active counters as it unblocks waiting processes, the counters will accurately reflect the new state of the system before any new processes can get into the mutual exclusion. Because the updating is already done, the unblocked processes need not reenter the critical section at all. Implementing this pattern is easy.


Identify all of the work that would have been done by an unblocked process and make the unblocking process do it instead. Suppose three processes arrived when the resource was busy, but one of them lost its quantum just before blocking itself at line 9 (which is unlikely, but certainly possible).

If a new process arrives before the older ones resume, the new one will decide to block itself. However, it will breeze past the semWait in line 9 without blocking, and when the process that lost its quantum earlier runs, it will block itself instead. Indeed, because the unblocking order of semaphores is implementation dependent, the only portable way to ensure that processes proceed in a particular order is to block each on its own semaphore.

The departing process updates the system state on behalf of the processes it unblocks. After you unblock a waiting process, you leave the critical section or block yourself without opening the mutual exclusion.


The process can therefore safely update the system state on its own. When it is finished, it reopens the mutual exclusion. Newly arriving processes can no longer cut in line because they cannot enter the mutual exclusion until the unblocked process has finished. However, once you have unblocked a process, you must immediately stop accessing the variables protected by the mutual exclusion.

The safest approach is to leave immediately (after line 26 the process leaves without opening the mutex) or to block yourself. Only one waiting process can be unblocked even if several are waiting; to unblock more would violate the mutual exclusion of the status variables. This problem is solved by having the newly unblocked process check whether more processes should be unblocked. If so, it passes the baton to one of them (line 15); if not, it opens up the mutual exclusion for new arrivals. This pattern synchronizes processes like runners in a relay race.


As each runner finishes her lap, she passes the baton to the next runner. In the synchronization world, being in the mutual exclusion is analogous to having the baton: only one person can have it.
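A minimal sketch of the pattern, assuming POSIX semaphores and a resource that at most MAX processes may use at once (MAX, acquire, and release are illustrative names; the line numbers from the quoted paper are not reproduced):

#include <semaphore.h>

#define MAX 3
sem_t mutex;      /* the "baton": protects the counters; init to 1 */
sem_t blocked;    /* where waiting processes sleep; init to 0      */
int active = 0;   /* processes currently using the resource */
int waiting = 0;  /* processes blocked waiting for it       */

void acquire(void) {
    sem_wait(&mutex);              /* grab the baton */
    if (active < MAX) {
        active++;
        sem_post(&mutex);          /* put the baton down */
    } else {
        waiting++;
        sem_post(&mutex);
        sem_wait(&blocked);        /* sleep; the waker updates the counters */
        /* on wakeup the baton is implicitly ours; do not re-lock mutex */
        sem_post(&mutex);          /* put the baton down */
    }
}

void release(void) {
    sem_wait(&mutex);
    active--;
    if (waiting > 0) {
        waiting--;                 /* do the unblocked process's  */
        active++;                  /* bookkeeping on its behalf   */
        sem_post(&blocked);        /* pass the baton: leave WITHOUT posting mutex */
    } else {
        sem_post(&mutex);
    }
}

The key design point is that release never posts mutex when it wakes a waiter: the baton passes directly to the woken process, so newcomers cannot cut in line.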

The solution is to move the else line, which appears just before the end line in semWait, to just before the end line in semSignal. Thus, the last semSignalB(mutex) in semWait becomes unconditional and the semSignalB(mutex) in semSignal becomes conditional.

The semaphore s controls access to the critical region, and you only want the critical region to include the append or take function.

If the buffer is allowed to contain n entries, then the problem is to distinguish an empty buffer from a full one. Consider a buffer of six slots with only one entry, and then a buffer that is one element shy of being full: in both cases the in and out pointers stand in the same relation, so the two states cannot be told apart from the pointers alone. You could use an auxiliary variable, count, which is incremented and decremented appropriately.
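A sketch of the count-based fix (a single producer and single consumer are assumed, and the function names are illustrative):

#define N 6
int buffer[N];
int in = 0, out = 0;   /* next write / next read positions */
int count = 0;         /* number of occupied slots */

int append(int item) {          /* returns 0 on success, -1 if full */
    if (count == N) return -1;  /* in == out AND count == N: full   */
    buffer[in] = item;
    in = (in + 1) % N;
    count++;
    return 0;
}

int take(int *item) {           /* returns 0 on success, -1 if empty */
    if (count == 0) return -1;  /* in == out AND count == 0: empty   */
    *item = buffer[out];
    out = (out + 1) % N;
    count--;
    return 0;
}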

There is an array of message slots that constitutes the buffer. Each process maintains a linked list of slots in the buffer that constitute the mailbox for that process. The message operations can be implemented in terms of these lists.

This solution is taken from [TANE97]. The synchronization process maintains a counter and a linked list of waiting processes for each semaphore. To do a WAIT or SIGNAL, a process calls the corresponding library procedure, wait or signal, which sends a message to the synchronization process specifying both the operation desired and the semaphore to be used.

When the message arrives, the synchronization process checks the counter to see if the required operation can be completed. If the operation is allowed, the synchronization process sends back an empty message, thus unblocking the caller. If, however, the operation is a WAIT and the semaphore is 0, the synchronization process enters the caller onto the queue and does not send a reply. The result is that the process doing the WAIT is blocked, just as it should be.


Later, when a SIGNAL is done, the synchronization process picks one of the processes blocked on the semaphore, either in FIFO order, priority order, or some other order, and sends a reply. Race conditions are avoided here because the synchronization process handles only one request at a time.
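A sketch of that scheme (queue_t, receive, reply, enqueue, dequeue, and ANY_SENDER are assumed primitives standing in for whatever message-passing facility is available, not a real API):

enum { WAIT, SIGNAL };

typedef struct { int op; int sem; int sender; } request_t;

#define NSEM 16
static int count[NSEM];          /* current value of each semaphore     */
static queue_t waiters[NSEM];    /* blocked callers, one queue per sem  */

void sync_process(void) {
    request_t r;
    for (;;) {
        receive(ANY_SENDER, &r);          /* one request at a time: no races */
        if (r.op == WAIT) {
            if (count[r.sem] > 0) {
                count[r.sem]--;
                reply(r.sender);          /* empty reply unblocks the caller */
            } else {
                enqueue(&waiters[r.sem], r.sender);  /* no reply: caller stays blocked */
            }
        } else {                          /* SIGNAL */
            if (!queue_empty(&waiters[r.sem]))
                reply(dequeue(&waiters[r.sem]));     /* wake one blocked process */
            else
                count[r.sem]++;
        }
    }
}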

Mutual exclusion. Only one process may use a resource at a time.
Hold and wait. A process may hold allocated resources while awaiting assignment of others.
No preemption. No resource can be forcibly removed from a process holding it.
Circular wait. A closed chain of processes exists, such that each process holds at least one resource needed by the next process in the chain.

Alternatively, if a process requests a resource that is currently held by another process, the operating system may preempt the second process and require it to release its resources. If a process has been allocated resources of type R, then it may subsequently request only those resources of types following R in the ordering.

Deadlock avoidance allows the three necessary conditions but makes judicious choices to assure that the deadlock point is never reached. With deadlock detection, requested resources are granted to processes whenever possible.

Mutual exclusion: Only one car can occupy a given quadrant of the intersection at a time.

Hold and wait: No car ever backs up; each car in the intersection waits until the quadrant in front of it is available. No preemption: No car is allowed to force another car out of its way. Circular wait: Each car is waiting for a quadrant of the intersection occupied by another car. Hold-and-wait approach: Require that a car request both quadrants that it needs, blocking the car until both quadrants can be granted.

No-preemption and circular-wait approaches: The algorithms discussed in the chapter apply to this problem.

Essentially, deadlock is avoided by not granting requests that might lead to deadlock. The problem here again is one of backup. The possible sequences are:

1. Q acquires B and A, and then releases B and A. When P resumes execution, it will be able to acquire both resources.

2. Q acquires B and A. P executes and blocks on a request for A. Q releases B and A. When P resumes execution, it will be able to acquire both resources.
3. Q acquires B, and then P acquires and releases A. Q acquires A and then releases B and A. When P resumes execution, it will be able to acquire B.

4. P acquires A, and then Q acquires B. P releases A. Q acquires A and then releases B. P acquires B and then releases B.

5. P acquires and then releases A. P acquires B. Q executes and blocks on request for B. P releases B. When Q resumes execution, it will be able to acquire both resources.
6. P acquires A and releases A, and then acquires and releases B. However, once P releases A, Q can proceed; once Q releases B, P can proceed.

Running the banker's algorithm, we see processes can finish in the order p1, p4, p5, p2, p3.
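A sketch of the safety check behind that conclusion (the banker's algorithm; the dimensions N and M and the function name are illustrative):

#include <string.h>

#define N 5   /* processes      */
#define M 4   /* resource types */

/* Returns 1 if the state is safe: some completion order exists. */
int is_safe(int available[M], int need[N][M], int alloc[N][M]) {
    int work[M], finish[N] = {0};
    memcpy(work, available, sizeof(work));

    for (int done = 0; done < N; ) {
        int progress = 0;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            int ok = 1;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { ok = 0; break; }
            if (ok) {                       /* process i can run to completion */
                for (int j = 0; j < M; j++)
                    work[j] += alloc[i][j]; /* it releases everything it holds */
                finish[i] = 1;
                progress = 1;
                done++;
            }
        }
        if (!progress) return 0;            /* no process can finish: unsafe */
    }
    return 1;
}

The order in which processes are marked finished (here p1, p4, p5, p2, p3) is the safe completion order quoted above.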

Change available to (2,0,0,0) and p3's row of "still needs" to (6,5,2,2). Now p1, p4, and p5 can finish, but with available now (4,6,9,8), neither p2's nor p3's "still needs" can be satisfied. So it is not safe to grant p3's request.

Mark P1; no deadlock is detected.

With the new resource constraints, P cannot be delayed indefinitely by O. By examining the resource constraints listed in the solution to Problem 6. we can observe the following: Procedure returns can take place immediately because they only release resources.

Output consumption can take place immediately after output becomes available. Output production can be delayed temporarily until all previous output has been consumed and made at least reso pages available for further output. Input consumption can take place immediately after input becomes available.

Input production can be delayed until all previous input and the corresponding output have been consumed.

Creating the process would result in the state:

Process   Max   Hold   Claim
   1       70    45     25
   2       60    40     20
   3       60    15     45
   4       60    25     35
Free: 25

There is sufficient free memory to guarantee the termination of either P1 or P2.

After that, the remaining three jobs can be completed in any order.

Creating the process would result in the trivially unsafe state:

Process   Max   Hold   Claim
   1       70    45     25
   2       60    40     20
   3       60    15     45
   4       60    35     25
Free: 15

Most OSs ignore deadlock. But Solaris only lets the superuser use the last process table slot.

The buffer is declared to be an array of shared elements of type T. Another array defines the number of input elements available to each process. Each process keeps track of the index j of the buffer element it is referring to at the moment. The notation region v do S means that at most one process at a time can enter the critical region associated with variable v to perform statement S.

A deadlock is a situation in which two or more processes are unable to proceed because each is waiting for one of the others to do something.

Deadlock occurs if all resource units are reserved while one or more processes are waiting indefinitely for more units. But if all 4 units are reserved, at least one process has acquired 2 units.

Therefore, that process will be able to complete its work and release both units, thus enabling another process to continue.
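Stated generally (a hedged restatement, assuming three processes each with a maximum claim of two units, which is what the pigeonhole step above implies; m is the number of units and n the number of processes): deadlock is impossible whenever

    sum of maximum claims < m + n,  here  2 + 2 + 2 = 6 < 4 + 3 = 7.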


Then a deadlock cannot occur. In the state shown in the problem, if one additional unit is available, P2 can run to completion, releasing its resources, making 2 units available.

This would allow P1 to run to completion, making 3 units available. But at this point P3 needs 6 units and P4 needs 5 units. If, to begin with, there had been 3 units available instead of 1 unit, there would now be 5 units available. This would allow P4 to run to completion, making 7 units available, which would allow P3 to run to completion.

In order from most-concurrent to least, there is a rough partial order on the deadlock-handling algorithms. Their effects after deadlock is detected are harder to characterize. The third algorithm is the strangest, since so much of its concurrency will be useless repetition; because threads compete for execution time, this algorithm also prevents useful computation from advancing.

Hence it is listed twice in this ordering, at both extremes. The banker's algorithm prevents unsafe allocations (a proper superset of deadlock-producing allocations), and resource ordering restricts allocation sequences so that threads have fewer options as to whether they must wait or not.

By reserving all resources in advance, threads have to wait longer and are more likely to block other threads while they work, so the system-wide execution is in effect more linear. In order from most-efficient to least, there is a rough partial order on the deadlock-handling algorithms. Notice that this is a result of the same static restrictions that made these rank poorly in concurrency.

Resource-dependency chains are bounded by the number of threads, the number of resources, and the number of allocations.

First, because threads run the risk of restarting, they have a low probability of completing. Second, they are competing with other restarting threads for finite execution time, so the entire system advances towards completion slowly if at all. This ordering does not change when deadlock is more likely. The algorithms in the first group incur no additional runtime penalty because they statically disallow deadlock-producing execution.

The second group incurs a minimal, bounded penalty when deadlock occurs. The algorithm in the third tier incurs the unrolling cost, which is O(n) in the number of memory writes performed between checkpoints. The status of the final algorithm is questionable because the algorithm does not allow deadlock to occur; it might be the case that unrolling becomes more expensive, but the behavior of this restart algorithm is so variable that accurate comparative analysis is nearly impossible.

Assume that the table is in deadlock, i.e., there is a nonempty set D of philosophers each of whom clutches one fork while waiting endlessly for the other. Since a lefty Pj in D clutches his left fork and cannot have his right fork, his right neighbor Pk never completes his dinner and is also a lefty. Continuing the argument rightward around the table shows that all philosophers in D are lefties. This contradicts the existence of at least one righty. Therefore deadlock is not possible.

Assume that lefty Pj starves, i.e., there is a stable pattern in which Pj never eats. Suppose Pj holds no fork.

Then Pj's left neighbor Pi must continually hold his right fork and never finish eating. Thus Pi is a righty holding his right fork, but never getting his left fork to complete a meal, i.e., Pi also starves. Now Pi's left neighbor must be a righty who continually holds his right fork.

Proceeding leftward around the table with this argument shows that all philosophers are starving righties. But Pj is a lefty: a contradiction. Thus Pj must hold one fork. As Pj continually holds one fork and waits for his right fork, Pj's right neighbor Pk never sets his left fork down and never completes a meal, i.e., Pk also starves. If Pk did not continually hold his left fork, Pj could eat; therefore Pk holds his left fork. Carrying the argument rightward around the table shows that all philosophers are starving lefties, contradicting the existence of at least one righty. Starvation is thus precluded.
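A sketch of the arrangement the argument describes, using pthreads (N, the fork numbering, and making philosopher 0 the one righty are all illustrative choices):

#include <pthread.h>

#define N 5
pthread_mutex_t fork_mtx[N];        /* one mutex per fork */

void philosopher(int i) {
    int left  = i;                  /* fork on the left  */
    int right = (i + 1) % N;        /* fork on the right */
    /* lefties take the left fork first; the one righty takes the right first */
    int first  = (i == 0) ? right : left;
    int second = (i == 0) ? left  : right;

    for (;;) {
        pthread_mutex_lock(&fork_mtx[first]);
        pthread_mutex_lock(&fork_mtx[second]);
        /* eat */
        pthread_mutex_unlock(&fork_mtx[second]);
        pthread_mutex_unlock(&fork_mtx[first]);
        /* think */
    }
}

With at least one righty and at least one lefty at the table, the cyclic hold-and-wait chain can never close, which is exactly what the two proofs above establish.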

The logic of the solution of Figure 6. is essentially the same.

Therefore, a simple read operation cannot be used; a special read operation for the atomic data type is needed. For example, c could equal 4 (what we expect), yet d could equal 1 (not what we expect). Using the mb() ensures a and b are written in the intended order, while the rmb() ensures c and d are read in the intended order. This example is from [LOVE04].
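A sketch reconstructing that example (the initial values and the kernel-style barrier macros mb() and rmb() follow [LOVE04]; treat the exact names and setting as assumptions):

/* Initially a = 1, b = 2. One CPU writes, another reads. */
int a = 1, b = 2;

void writer(void)          /* CPU 0 */
{
    a = 3;
    mb();                  /* commit the store to a before the store to b */
    b = 4;
}

void reader(void)          /* CPU 1 */
{
    int c, d;
    c = b;
    rmb();                 /* complete the load of b before the load of a */
    d = a;
    /* with the barriers: if c == 4, then d must be 3, never the stale 1 */
}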

A program may not be loaded into the same region of main memory each time it runs; in addition, we would like to be able to swap active processes in and out of main memory to maximize processor utilization by providing a large pool of ready processes to execute.

In both these cases, the specific location of the process in main memory is unpredictable. Furthermore, most programming languages allow the dynamic calculation of addresses at run time, for example by computing an array subscript or a pointer into a data structure. Hence all memory references generated by a process must be checked at run time to ensure that they refer only to the memory space allocated to that process.

Also, processes that are cooperating on some task may need to share access to the same data structure.

With unequal-size fixed partitions it is possible to provide one or two quite large partitions and still have a large number of partitions. The large partitions can allow the entire loading of large programs. Internal fragmentation is reduced because a small program can be put into a small partition.

External fragmentation is a phenomenon associated with dynamic partitioning; it refers to the fact that a large number of small areas of main memory, external to any partition, accumulates. A relative address is a particular example of a logical address, in which the address is expressed as a location relative to some known point, usually the beginning of the program.

A physical address, or absolute address, is an actual location in main memory. Exactly one page can fit in one frame.

With segmentation, the program and its associated data are divided into a number of segments.

It is not required that all segments of all programs be of the same length, although there is a maximum segment length. Eight bits are needed to identify one of the 2^8 = 256 partitions.

The probability that a given segment is followed by a hole in memory (and not by another segment) is 0.5. It is intuitively reasonable that the number of holes must be less than the number of segments, because neighboring segments can be combined into a single hole on deletion.
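In symbols (a hedged sketch of the expected-value step; s is an assumed name for the number of segments): if each segment is independently followed by a hole with probability 1/2, the expected number of holes is

    E[holes] = s / 2,

i.e., on average there are half as many holes as segments.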

The worst fit algorithm maximizes the chance that the free space left after a placement will be large enough to satisfy another request, thus minimizing the frequency of compaction.

The disadvantage of this approach is that the largest blocks are allocated first; therefore a request for a large area is more likely to fail. The 40M block fits into the second hole, with a starting address of 80M. The 20M block fits into the first hole, with a starting address of 20M. The 10M block is placed at location M.

This scheme offers more block sizes than a binary buddy system, and so has the potential for less internal fragmentation, but can cause additional external fragmentation because many uselessly small blocks are created.

However, we wish the program to be relocatable. Therefore, it might be preferable to use relative addresses in the instruction register. Alternatively, the address in the instruction register can be converted to relative when a process is swapped out of memory.

Therefore, 26 bits are required for the logical address. A frame is the same size as a page. So 22 bits are needed to specify the frame. There is one entry for each page in the logical address space, so there are as many page-table entries as pages.

For segment 0, the physical address is the segment base plus the offset. Segment 1 is shorter than the referenced offset, so that address triggers a segment fault.

Observe that a reference occurs to some segment in memory each time unit, and that one segment is deleted every t references.

The system's operation time t0 is then the time required for the boundary to cross the hole. The compaction operation requires two memory references (a fetch and a store) plus overhead for each of the (1 - f)m words to be moved.

Virtual memory paging: In general, the principle of locality allows the algorithm to predict which resident pages are least likely to be referenced in the near future and are therefore good candidates for being swapped out.

Its purpose is to avoid, most of the time, having to go to disk to retrieve a page table entry. With prepaging, pages other than the one demanded by a page fault are brought in.

Page replacement policy deals with the selection of the page in memory to be replaced when a new page must be brought in. The working set of a process is the set of pages of that process that have been referenced recently. A precleaning policy writes modified pages before their page frames are needed, so that pages can be written out in batches.

Split the binary address into virtual page number and offset; use the VPN as an index into the page table; extract the page frame number; concatenate the offset to get the physical memory address.
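As a sketch in C (32-bit addresses and 4 KB pages are assumed purely for illustration):

#include <stdint.h>

#define OFFSET_BITS 12
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

uint32_t translate(uint32_t vaddr, const uint32_t *page_table) {
    uint32_t vpn    = vaddr >> OFFSET_BITS;      /* virtual page number    */
    uint32_t offset = vaddr & OFFSET_MASK;       /* byte within the page   */
    uint32_t pfn    = page_table[vpn];           /* page frame number      */
    return (pfn << OFFSET_BITS) | offset;        /* concatenate PFN|offset */
}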

Thus, each page table can handle 8 of the required 22 bits. Therefore, 3 levels of page tables are needed. Tables at two of the levels have 2^8 entries; tables at one level have 2^6 entries. Less space is consumed if the top level has 2^6 entries.

a. PFN 3, since it was loaded longest ago (at time 20). b. PFN 1, since it was referenced longest ago. These two policies are equally effective for this particular page trace.

The P bit in each segment table entry provides protection for the entire segment.

The address space, however, is 2^64 bytes. Adding a second layer of page tables, the top page table would point to 2^10 page tables, addressing a total of 2^32 bytes. But only 2 bits of the 6th level are required, not the entire 10 bits.

So instead of requiring your virtual addresses to be 72 bits long, you could mask out and ignore all but the 2 lowest-order bits of the 6th level. Your top-level page table then would have only 4 entries. Yet another option is to revise the criterion that the top-level page table fit into a single physical page, and instead make it fit into 4 pages. This would save a physical page, which is not much.

This is a familiar effective-access-time calculation, with two cases. First, when the TLB contains the required entry, we pay the 20 ns TLB overhead on top of the memory access time. Second, when the TLB does not contain the item, we pay an additional memory access to bring the required entry into the TLB.
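As a formula (h, t, and m are assumed symbols for the TLB hit ratio, the TLB lookup time, and the memory access time, since the original figures did not survive):

    EA = h (t + m) + (1 - h)(t + 2m)

With t = 20 ns, this simplifies to 20 + m + (1 - h) m.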

Snow falling on the track is analogous to page hits on the circular clock buffer. Note that the density of replaceable pages is highest immediately in front of the clock pointer, just as the density of snow is highest immediately in front of the plow.

In fact, it can be shown that the depth of the snow in front of the plow is twice the average depth on the track as a whole. By this analogy, the number of pages replaced by the CLOCK policy on a single circuit should be twice the number that are replaceable at a random time.

The analogy is imperfect because the CLOCK pointer does not move at a constant rate, but the intuitive idea remains. See Knuth, The Art of Computer Programming, Volume 3: Sorting and Searching. Reading, MA: Addison-Wesley.

The operating system can maintain a number of queues of page-frame tables. A page-frame table entry moves from one queue to another according to how long the reference bit from that page frame stays set to zero.

When pages must be replaced, the pages to be replaced are chosen from the queue of the longest-life nonreferenced frames.

Use a mechanism that adjusts the value of Q at each window time as a function of the actual page fault rate experienced during the window. The page fault rate is computed and compared with a system-wide value for the "desirable" page fault rate for a job. The value of Q is adjusted upward (downward) whenever the actual page fault rate of a job is higher (lower) than the desirable value.

Experimentation using this adjustment mechanism showed that execution of the test jobs with dynamic adjustment of Q consistently produced a lower number of page faults per execution and a decreased average resident set size than execution with a constant value of Q within a very broad range. The memory-time product (MT) versus Q using the adjustment mechanism also produced a consistent and considerable improvement over the previous test results using a constant value of Q.
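A minimal sketch of that feedback rule (the step size and the treatment of the desirable rate as a single threshold are assumptions, not details from the study):

/* Adjust the window size Q once per window, based on the measured rate. */
double adjust_Q(double Q, double actual_rate, double desirable_rate) {
    const double step = 0.25;            /* assumed tuning constant */
    if (actual_rate > desirable_rate)
        return Q * (1.0 + step);         /* too many faults: grow the window  */
    if (actual_rate < desirable_rate)
        return Q * (1.0 - step);         /* too few: shrink the resident set  */
    return Q;
}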

The memory time product MT versus Q using the adjustment mechanism also produced a consistent and considerable improvement over the previous test results using a constant value of Q. If total number of entries stays at 32 and the page size does not change, then each entry becomes 8 bits wide. By convention, the contents of memory beyond the current top of the stack are undefined. On almost all architectures, the current top of stack pointer is kept in a well-defined register.

Therefore, the kernel can read its contents and deallocate any unused pages as needed. The reason that this is not done is that little is gained by the effort. If the user program will repeatedly call subroutines that need additional space for local variables (a very likely case), then much time will be wasted deallocating stack space in between calls and then reallocating it later on.

If the subroutine called is only used once during the life of the program and no other subroutine will ever be called that needs the stack space, then eventually the kernel will page out the unused portion of the space if it needs the memory for other purposes. In either case, the extra logic needed to recognize the case where a stack could be shrunk is unwarranted.

Long-term scheduling: The decision to add to the pool of processes to be executed. Medium-term scheduling: The decision to add to the number of processes that are partially or fully in main memory. Short-term scheduling: The decision as to which available process will be executed by the processor.

Response time is the elapsed time between the submission of a request until the response begins to appear as output.

Some systems, such as Windows, use the opposite convention: a higher number means a higher priority. The currently running process may be interrupted and moved to the Ready state by the operating system. The decision to preempt may be performed when a new process arrives, when an interrupt occurs that places a blocked process in the Ready state, or periodically, based on a clock interrupt.

When the currently running process ceases to execute, the process that has been in the ready queue the longest is selected for running.

When the interrupt occurs, the currently running process is placed in the ready queue, and the next ready job is selected on a FCFS basis.

In this case, the scheduler always chooses the process that has the shortest expected remaining processing time.

When a new process joins the ready queue, it may in fact have a shorter remaining time than the currently running process.

Accordingly, the scheduler may preempt whenever a new process becomes ready. When a process first enters the system, it is placed in RQ0 (see Figure 9. ). After its first execution, when it returns to the Ready state, it is placed in RQ1. Each subsequent time that it is preempted, it is demoted to the next lower-priority queue. A shorter process will complete quickly, without migrating very far down the hierarchy of ready queues.

A longer process will gradually drift downward. Thus, newer, shorter processes are favored over older, longer processes. Within each queue, except the lowest-priority queue, a simple FCFS mechanism is used. Once in the lowest-priority queue, a process cannot go lower, but is returned to this queue repeatedly until it completes execution.
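A sketch of this feedback discipline (queue_t, proc_t, and the helper functions are assumed scaffolding, not a real API):

#define NQ 4
queue_t rq[NQ];                     /* rq[0] is the highest priority */

void on_new_process(proc_t *p) {
    p->level = 0;
    push(&rq[0], p);                /* new arrivals enter RQ0 */
}

void on_preempt(proc_t *p) {        /* quantum expired */
    if (p->level < NQ - 1)
        p->level++;                 /* demote one level */
    push(&rq[p->level], p);         /* lowest queue: re-enter the same queue */
}

proc_t *pick_next(void) {
    for (int i = 0; i < NQ; i++)    /* scan queues from highest priority */
        if (!empty(&rq[i]))
            return pop(&rq[i]);     /* FCFS within each queue */
    return 0;                       /* no ready process */
}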

The proof can be extended to cover later arrivals. A sophisticated analysis of this type of estimation procedure is contained in Applied Optimal Estimation, edited by Gelb, M.I.T. Press.
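The estimation procedure in question is presumably the exponential averaging used for burst-time prediction; a one-line sketch (names assumed):

/* S[n+1] = alpha * T[n] + (1 - alpha) * S[n]: blend the most recent
   observed burst T[n] into the running estimate S[n]. */
double predict_next(double estimate, double last_burst, double alpha) {
    return alpha * last_burst + (1.0 - alpha) * estimate;
}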

If you do, then it is entitled to 2 additional time units before it can be preempted. At that time, job 3 will have the smallest response ratio of the three. Here the response ratio of job 1 is the smaller, and consequently job 2 is selected for service at time t. This algorithm is repeated each time a job is completed to take new arrivals into account.
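The response ratio used in these comparisons (w and s are assumed symbols for the time spent waiting and the expected service time):

    R = (w + s) / s

A job's ratio grows while it waits, so the algorithm postpones the job whose ratio will grow least.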

Note that this algorithm is not quite the same as highest response ratio next. The latter would schedule job 1 at time t. Intuitively, it is clear that the present algorithm attempts to minimize the maximum response ratio by consistently postponing jobs that will suffer the least increase of their response ratios.

A proof, due to Mondrup, is reported in [BRIN73]. Consider the queue at time t immediately after a departure, and ignore further arrivals.

The waiting jobs are numbered 1 to n in the order in which they will be scheduled. The above inequality can therefore be extended. Notice that this proof is valid in general for priorities that are non-decreasing functions of time.

For example, in a FIFO system, priorities increase linearly with waiting time at the same rate for all jobs.

An operating system is deterministic to the extent that it performs operations at fixed, predetermined times or within predetermined time intervals.

A then locks the mutex (line 13), signals the condition variable (line 15), and then unlocks the mutex.
