Finals Revision
copyright chocolate-lips jk i know nothings
good luck~~
DISCLAIMER: THINGS IN THIS REVISION SHEET MAY BE WRONG. IF YOU DO SPOT ANYTHING WRONG, PLEASE SCREAM DIRECTLY INTO MY FACE AND I WILL CHANGE IT AS SOON AS I CAN.
ALSO, THIS REVISION SHEET DOES NOT CONTAIN THE INFORMATION BEFORE MIDTERMS. I HAVE A SEPARATE PAGE FOR THAT AT THE LEFT SIDE OF THIS PAGE.
FINALLY, I SCREENSHOTTED THE MOST IMPORTANT SLIDES AND PLACED THEM HERE, BUT PLEASE REMEMBER TO GO THROUGH ALL OF THE SLIDES BEFORE THE EXAM.
Threads
Stack Frame and Introduction
Addresses - Bottom is Low and Top is High
Every function call (subprogram) has its own assigned stack frame, used to store important information.
The Stack Pointer is used to point to the beginning of the stack frame.
- The stack frame is “allocated” by decrementing the stack pointer - in other words, it's a way for your function to say “I'm using this bit of the stack”. This also means that if an interrupt happens, anything at addresses lower (downwards) than the stack pointer is fair game - the OS can wipe it and replace it with other data.
The Program Counter, to revise, is the value stored in the Program Counter Register: the address of the currently executing instruction.
Different functions have different stack frame sizes, because their local variables take up different amounts of space.
Size of stack frame is determined at compile-time.
Note that there are two kinds of stack - a user stack for user-level code and a kernel stack used during system calls.
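Quick illustration (just a sketch - the exact frame sizes are up to the compiler): two functions with different local variables get different amounts of stack reserved when the stack pointer is decremented.

#include <stdio.h>

/* Small frame: only a couple of scalar locals. */
int small_frame(int x) {
    int y = x + 1;
    return y * 2;
}

/* Bigger frame: the 1 KiB local array forces the compiler to
   reserve much more space by decrementing the stack pointer further. */
int big_frame(int x) {
    char buffer[1024];
    buffer[0] = (char)x;
    return buffer[0] + 1;
}

int main(void) {
    printf("%d %d\n", small_frame(1), big_frame(2));
    return 0;
}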
Multithreading
From our good friend Wikipedia,
In computer architecture, multithreading is the ability of a central processing unit (CPU) or a single core in a multi-core processor to execute multiple processes or threads concurrently, appropriately supported by the operating system.
Remember, this means that the number of threads running concurrently may not be the same as the number of cores on the CPU!
Also note that threads are a software concept, not a hardware concept.
Advantages of multithreading:
- Responsiveness
- Resource Sharing (threads share the resources of the process they belong to)
- Economy
- Scalability
Common Threading Functions
Header required is <pthread.h>
int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
void *(*start_routine) (void *), void *arg);
If you have multiple args, put them in a struct and pass a pointer to that struct as the arg parameter of pthread_create.
On success, it returns 0; on error, it returns an error number, and the contents of *thread are undefined.
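A minimal sketch of that struct-of-args pattern (thread_args and worker are made-up names, not anything from the slides):

#include <pthread.h>
#include <stdio.h>

struct thread_args {            /* bundle multiple arguments into one struct */
    int id;
    const char *message;
};

void *worker(void *arg) {
    struct thread_args *a = arg;                    /* unpack the struct */
    printf("thread %d says: %s\n", a->id, a->message);
    return NULL;
}

int main(void) {
    pthread_t tid;
    struct thread_args args = { 42, "hello" };

    /* pass a pointer to the struct as the single void* argument */
    if (pthread_create(&tid, NULL, worker, &args) != 0) {
        fprintf(stderr, "pthread_create failed\n");
        return 1;
    }
    pthread_join(tid, NULL);    /* wait, so args stays valid while worker runs */
    return 0;
}

Compile with gcc -pthread.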
int pthread_join(pthread_t thread, void **retval);
Similar in usage to wait() when forking, in that it waits for a particular thread to terminate.
On success, it returns 0; on error, it returns an error number.
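Rough sketch of grabbing a thread's return value through retval (sum_worker is an invented name):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

void *sum_worker(void *arg) {
    int *result = malloc(sizeof *result);   /* heap, so it outlives the thread */
    *result = 1 + 2 + 3;
    return result;                          /* equivalent to pthread_exit(result) */
}

int main(void) {
    pthread_t tid;
    void *retval;

    pthread_create(&tid, NULL, sum_worker, NULL);
    pthread_join(tid, &retval);             /* blocks until sum_worker finishes */
    printf("thread returned %d\n", *(int *)retval);
    free(retval);
    return 0;
}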
int pthread_detach(pthread_t thread);
The pthread_detach() function marks the thread identified by thread as detached. When a detached thread terminates, its resources are automatically released back to the system without the need for another thread to join with the terminated thread.
On success, pthread_detach() returns 0; on error, it returns an error number.
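Sketch of how detach gets used (logger is a made-up name; the sleep is just a crude way to keep the process alive long enough, since exiting main would kill the detached thread too):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void *logger(void *arg) {
    printf("detached thread doing its thing\n");
    return NULL;        /* resources get released automatically - nobody joins us */
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, logger, NULL);
    pthread_detach(tid);    /* promise: we will never call pthread_join on tid */
    sleep(1);               /* keep the process around so logger can finish */
    return 0;
}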
int pthread_cancel(pthread_t thread) is kinda like kill (sends a cancellation request to a thread)
After a canceled thread has terminated, a join with that thread using pthread_join(3) obtains PTHREAD_CANCELED as the thread's exit status. (Joining with a thread is the only way to know that cancellation has completed.)
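Sketch of cancelling a thread and confirming it through join (spinner is a made-up name; by default the request is only acted on at cancellation points, and sleep is one of them):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void *spinner(void *arg) {
    while (1)
        sleep(1);       /* sleep is a cancellation point */
    return NULL;
}

int main(void) {
    pthread_t tid;
    void *res;

    pthread_create(&tid, NULL, spinner, NULL);
    sleep(1);
    pthread_cancel(tid);        /* send the cancellation request */
    pthread_join(tid, &res);    /* the only way to know cancellation completed */
    if (res == PTHREAD_CANCELED)
        printf("thread was cancelled\n");
    return 0;
}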
void pthread_exit(void *retval);
The pthread_exit() function terminates the calling thread and returns a value via retval that (if the thread is joinable) is available to another thread in the same process that calls pthread_join(3).
In that way, pthread_exit() is very similar to exit for forking.
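One more sketch to make this concrete: calling pthread_exit in main() (instead of returning or calling exit) only ends the main thread, so already-created threads keep running - which ties into the notes below. (late_worker is a made-up name.)

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void *late_worker(void *arg) {
    sleep(1);
    printf("still ran to completion, even with no join\n");
    pthread_exit(NULL);     /* terminate just this thread, retval = NULL */
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, late_worker, NULL);
    /* unlike return / exit(), pthread_exit only kills the main thread,
       so the process stays alive until late_worker is done */
    pthread_exit(NULL);
}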
Some notes:
- A nice use case of join: say the main() function/thread creates a thread, doesn't wait (using join) for the created thread to complete, and simply exits - then the newly created thread will also stop!
- You should call detach if you're not going to wait for the thread to complete with join - the thread will just keep running until it's done, then terminate, without the main thread waiting for it specifically.
detach basically releases the resources that would otherwise be kept around so that join could work.
- Both Windows and Linux thread calls MIGHT be tested, but Linux calls are gonna have more emphasis. If you're super paranoid, you're gonna want to look up the Windows documentation for multithreading.
Synchronization
Atomicity
From Wikipedia:
In concurrent programming, an operation (or set of operations) is atomic, linearizable, indivisible or uninterruptible if it appears to the rest of the system to occur instantaneously. Atomicity is a guarantee of isolation from concurrent processes. Additionally, atomic operations commonly have a succeed-or-fail definition—they either successfully change the state of the system, or have no apparent effect.
In other words, atomic operations cannot be interrupted partway through by another process, while non-atomic ones can. This is because non-atomic operations, even when they look like a single instruction (or a single line of code), are actually made up of multiple instructions.
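To see why that matters, here's a tiny sketch (all the names in it are made up for illustration): counter++ looks like one line of C, but it's really load / add / store, so two threads doing it at once can interleave and lose updates. Compile with gcc -pthread (and without optimization) and the total usually comes out below 2000000.

#include <pthread.h>
#include <stdio.h>

long counter = 0;               /* shared between both threads */

void *incrementer(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;              /* NOT atomic: load, add 1, store */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, incrementer, NULL);
    pthread_create(&t2, NULL, incrementer, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("expected 2000000, got %ld\n", counter);
    return 0;
}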
This becomes problematic when we discuss...
Critical Sections
Critical Sections are basically the pieces of code that access and modify shared memory - memory that multiple processes (or threads) can get to. When multiple processes can interrupt each other while modifying the same piece of memory, of course things will mess up.
This particular mess-up is referred to as a race condition.
EXAMPLE:
This is a little hard to understand, so let's start with an example.
Here's an analysis of the Sleeping Barber problem: