Critical section

From Wikipedia, the free encyclopedia
Latest revision as of 21:38, 11 April 2024

In concurrent programming, concurrent accesses to shared resources can lead to unexpected or erroneous behavior. Thus, the parts of the program where the shared resource is accessed need to be protected in ways that avoid the concurrent access. One way to do so is known as a critical section or critical region. This protected section cannot be entered by more than one process or thread at a time; others are suspended until the first leaves the critical section. Typically, the critical section accesses a shared resource, such as a data structure, peripheral device, or network connection, that would not operate correctly in the context of multiple concurrent accesses.[1]

Need for critical sections


Different codes or processes may share the same variable or other resources that must be read or written, but whose results depend on the order in which the actions occur. For example, if a variable x is to be read by process A while process B is writing to the same variable x, process A may get either the old or the new value of x.

[Figure: Flow graph depicting the need for a critical section]

Process A:

// Process A
// ...
b = x + 5; // instruction executes at time = Tx
// ...

Process B:

// Process B
// ...
x = 3 + z; // instruction executes at time = Tx
// ...

In cases where a locking mechanism with finer granularity is not needed, a critical section is important. In the above case, if A needs to read the updated value of x, executing process A and process B at the same time may not give the required results. To prevent this, the variable x is protected by a critical section. First, B gains access to the section. Once B finishes writing the value, A gains access to the critical section, and the variable x can be read.

By carefully controlling which variables are modified inside and outside the critical section, concurrent access to the shared variable is prevented. A critical section is typically used when a multithreaded program must update multiple related variables without a separate thread making conflicting changes to that data. In a related situation, a critical section may be used to ensure that a shared resource, for example a printer, can only be accessed by one process at a time.
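The protection described above can be sketched with POSIX threads. This is a minimal illustration, not taken from the article's sources: two threads increment a shared counter, and the mutex-protected increment plays the role of the critical section (the names worker and run_demo are made up for this sketch).

```c
#include <pthread.h>
#include <stddef.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* enter the critical section */
        counter++;                   /* access the shared resource */
        pthread_mutex_unlock(&lock); /* leave the critical section */
    }
    return NULL;
}

long run_demo(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return counter;
}
```

With the lock in place the result is deterministically 200000; removing the lock/unlock pair lets the two read-modify-write sequences interleave, and updates can be lost.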

Implementation of critical sections


The implementation of critical sections varies among operating systems.

A critical section will usually terminate in finite time,[2] and a thread, task, or process attempting to enter it will have to wait only a bounded time (bounded waiting). To ensure exclusive use of critical sections, some synchronization mechanism is required at the entry and exit of the program.

A critical section is a piece of a program that requires mutual exclusion of access.

[Figure: Locks and critical sections in multiple threads]

As shown in the figure,[3] in the case of mutual exclusion (mutex), one thread blocks a critical section by using locking techniques when it needs to access the shared resource, and other threads must wait their turn to enter the section. This prevents conflicts when two or more threads share the same memory space and want to access a common resource.[2]

[Figure: Pseudocode for implementing a critical section]

The simplest method to prevent any change of processor control inside the critical section is to suspend preemption altogether. In uniprocessor systems, this can be done by disabling interrupts on entry into the critical section, avoiding system calls that can cause a context switch while inside the section, and restoring interrupts to their previous state on exit. With this implementation, any execution thread entering any critical section in the system will prevent any other thread, including an interrupt, from being granted processing time on the CPU until the original thread leaves its critical section.
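On a uniprocessor, this scheme reduces to saving and restoring the interrupt-enable state around the section. The sketch below is pseudocode: save_interrupt_state, disable_interrupts, and restore_interrupt_state stand for privileged operations (for example, reading FLAGS and executing CLI/STI on x86) and are not portable C.

```
unsigned long flags;

flags = save_interrupt_state();  /* remember whether interrupts were enabled */
disable_interrupts();            /* no context switch can occur from here on */

/* ... critical section: access the shared resource ... */

restore_interrupt_state(flags);  /* re-enable interrupts only if they were
                                    enabled before entry */
```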

This brute-force approach can be improved by using semaphores. To enter a critical section, a thread must obtain a semaphore, which it releases on leaving the section. Other threads are prevented from entering the critical section at the same time as the original thread, but are free to gain control of the CPU and execute other code, including other critical sections that are protected by different semaphores. Semaphore locking also has a time limit to prevent a deadlock condition in which a lock is acquired by a single process for an infinite time, stalling the other processes that need to use the shared resource protected by the critical section.

Uses of critical sections


Kernel-level critical sections


Typically, critical sections prevent thread and process migration between processors and the preemption of processes and threads by interrupts and other processes and threads.

Critical sections often allow nesting. Nesting allows multiple critical sections to be entered and exited at little cost.

If the scheduler interrupts the current process or thread in a critical section, the scheduler will either allow the currently executing process or thread to run to completion of the critical section, or it will schedule the process or thread for another complete quantum. The scheduler will not migrate the process or thread to another processor, and it will not schedule another process or thread to run while the current process or thread is in a critical section.

Similarly, if an interrupt occurs in a critical section, the interrupt information is recorded for future processing, and execution is returned to the process or thread in the critical section.[4] Once the critical section is exited, and in some cases the scheduled quantum completed, the pending interrupt will be executed. The concept of scheduling quantum applies to "round-robin" and similar scheduling policies.

Since critical sections may execute only on the processor on which they are entered, synchronization is required only within the executing processor. This allows critical sections to be entered and exited at almost no cost. No inter-processor synchronization is required; only instruction stream synchronization is needed.[5] Most processors provide the required amount of synchronization by interrupting the current execution state. This allows critical sections in most cases to be nothing more than a per-processor count of critical sections entered.

Performance enhancements include executing pending interrupts at the exit of all critical sections and allowing the scheduler to run at the exit of all critical sections. Furthermore, pending interrupts may be transferred to other processors for execution.
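The per-processor count and the deferred handling of interrupts described above can be sketched as a pair of nesting-aware enter/exit routines. This is pseudocode: cpu_local, disable_preemption, enable_preemption, and run_pending_interrupts are illustrative kernel primitives, not a real API.

```
void enter_critical(void)
{
    if (cpu_local()->cs_depth++ == 0)   /* outermost entry only */
        disable_preemption();
}

void exit_critical(void)
{
    if (--cpu_local()->cs_depth == 0) { /* outermost exit only */
        enable_preemption();
        run_pending_interrupts();       /* handle interrupts deferred
                                           while inside the section */
    }
}
```

Nested sections only touch the counter, which is why nesting can be entered and exited at little cost.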

Critical sections should not be used as a long-lasting locking primitive. Critical sections should be kept short enough so they can be entered, executed, and exited without any interrupts occurring from the hardware and the scheduler.

Kernel-level critical sections are the base of the software lockout issue.

Critical sections in data structures


In parallel programming, the code is divided into threads. The read-write conflicting variables are split between threads, and each thread has a copy of them. Data structures such as linked lists, trees, and hash tables have data variables that are linked and cannot be split between threads; hence, implementing parallelism is very difficult.[6] To improve the efficiency of implementing data structures, multiple operations such as insertion, deletion, and search can be executed in parallel. While performing these operations, there may be scenarios where one thread is searching for an element that another thread is deleting. In such cases, the output may be erroneous: the searching thread may have a hit, whereas the other thread may delete the element just afterwards, so the running program returns stale data. To prevent this, one method is to keep the entire data structure inside a critical section, so that only one operation is handled at a time. Another method is to lock only the node in use inside a critical section, so that other operations do not use the same node. Using a critical section thus ensures that the code provides the expected outputs.[6]
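The first method, keeping the whole structure under one critical section, can be sketched as a coarse-grained locked linked list in C with POSIX threads. This is an illustrative sketch (list_insert, list_contains, and run_demo are made-up names): every operation takes the same mutex, so only one operation runs at a time and a search can never observe a half-deleted node.

```c
#include <pthread.h>
#include <stdlib.h>

struct node {
    int key;
    struct node *next;
};

static struct node *head = NULL;
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

void list_insert(int key)
{
    struct node *n = malloc(sizeof *n);
    n->key = key;
    pthread_mutex_lock(&list_lock);   /* whole list is the critical section */
    n->next = head;
    head = n;
    pthread_mutex_unlock(&list_lock);
}

int list_contains(int key)
{
    int found = 0;
    pthread_mutex_lock(&list_lock);   /* search also excludes all writers */
    for (struct node *n = head; n != NULL; n = n->next)
        if (n->key == key) { found = 1; break; }
    pthread_mutex_unlock(&list_lock);
    return found;
}

int run_demo(void)
{
    list_insert(1);
    list_insert(2);
    return list_contains(1) && list_contains(2) && !list_contains(3);
}
```

The second method, per-node locking, allows more parallelism but requires a careful locking order (for example, hand-over-hand locking) to avoid deadlock.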

Critical sections in relation to peripherals


Critical sections also occur in code that manipulates external peripherals, such as I/O devices. The registers of a peripheral must be programmed with certain values in a certain sequence. If two or more processes control a device simultaneously, neither process will have the device in the state it requires, and incorrect behavior will ensue.

When a complex unit of information must be produced on an output device by issuing multiple output operations, exclusive access is required so that another process does not corrupt the datum by interleaving its own bits of output.

In the input direction, exclusive access is required when reading a complex datum via multiple separate input operations. This prevents another process from consuming some of the pieces, causing corruption.

Storage devices provide a form of memory. The concept of critical sections is equally relevant to storage devices as to shared data structures in main memory. A process that performs multiple access or update operations on a file is executing a critical section that must be guarded with an appropriate file locking mechanism.
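Such a guarded read-modify-write on a file can be sketched with the POSIX flock advisory lock. This is an illustrative sketch, not a prescribed mechanism (the file path and the names increment_file_counter and run_demo are made up): the exclusive lock makes the read-increment-write sequence a critical section across cooperating processes.

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/file.h>
#include <unistd.h>

/* Read an integer from the file, increment it, and write it back,
 * all under an exclusive advisory lock. Returns the new value. */
int increment_file_counter(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0)
        return -1;
    flock(fd, LOCK_EX);                 /* enter the file's critical section */

    char buf[32] = {0};
    ssize_t n = read(fd, buf, sizeof buf - 1);
    int value = (n > 0 ? atoi(buf) : 0) + 1;

    lseek(fd, 0, SEEK_SET);
    ftruncate(fd, 0);
    char out[32];
    int len = snprintf(out, sizeof out, "%d", value);
    ssize_t w = write(fd, out, (size_t)len);
    (void)w;

    flock(fd, LOCK_UN);                 /* leave the critical section */
    close(fd);
    return value;
}

int run_demo(void)
{
    const char *path = "/tmp/critical_section_demo.txt";
    remove(path);                       /* start from an empty counter */
    return increment_file_counter(path) == 1
        && increment_file_counter(path) == 2;
}
```

Without the lock, two processes could both read the old value and both write back the same incremented number, losing one update.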

See also

Database transaction
Dekker's algorithm
Eisenberg & McGuire algorithm
Lamport's bakery algorithm
Lock (computer science)
Mutual exclusion
Peterson's algorithm
Szymański's algorithm

References

  1. ^ Raynal, Michel (2012). Concurrent Programming: Algorithms, Principles, and Foundations. Springer Science & Business Media. p. 9. ISBN 978-3642320279.
  2. ^ a b Jones, M. Tim (2008). GNU/Linux Application Programming (2nd ed.). Hingham, Mass.: Charles River Media. p. 264. ISBN 978-1-58450-568-6.
  3. ^ Chen, Guancheng; Stenstrom, Per (Nov 10–16, 2012). "Critical lock analysis: Diagnosing critical section bottlenecks in multithreaded applications". 2012 International Conference for High Performance Computing, Networking, Storage and Analysis. pp. 1–11. doi:10.1109/sc.2012.40. ISBN 978-1-4673-0805-2. S2CID 12519578.
  4. ^ "Research Paper on Software Solution of Critical Section Problem". International Journal of Advance Technology & Engineering Research (IJATER). 1. November 2011.
  5. ^ Dubois, Michel; Scheurich, Christoph (1988). "Synchronization, Coherence, and Event Ordering in Multiprocessors". Survey and Tutorial Series. 21 (2): 9–21. doi:10.1109/2.15. S2CID 1749330.
  6. ^ a b Solihin, Yan (17 November 2015). Fundamentals of Parallel Multicore Architecture. Taylor & Francis. ISBN 9781482211184.