Computer Science 161 (2013)
Final Exam
May 7, 2013 - May 8, 2013 5:00 PM
240 minutes/240 points
Instructions
The points allocated to each problem correspond to how much time
we think the problem should take. Although you have 24 hours in
which to take the exam, we do not recommend that you pull an
all-nighter and spend 24 hours completing the exam. Here is Margo's
advice on how to complete this exam:
- Download the exam and read through the entire exam.
- Go to dinner and think about the exam.
- Spend a couple of hours Tuesday evening writing up answers to the questions
that you know.
- Get a good night's sleep.
- Spend another couple of hours on Wednesday working on the remaining
problems.
- Read over your answers.
- Email them to me at margo@eecs.harvard.edu.
- Show up in Pierce 309 to consume ice cream, pick up your T-shirt, and
optionally iron on a team patch (see the Piazza post for details).
Please send your answers to me in a PLAIN TEXT file as an attachment
(in a perfect world, your text file would not have lines longer than
80 characters, but I'll reformat it if it does).
Label the answers clearly so it is
obvious which answers correspond to which questions. Leave space
between each question. Please place your answers in order in the
file. Send your completed exam to margo@eecs.harvard.edu.
I will return the graded exam to you; I tend to engage in a dialog as
much as possible, so there is some educational value in reading more
than just the final number.
This exam is open textbook, open source code, and open notes. You
may not use computers (cell phones, PDAs, or any other device that can
talk to a network)
during the exam (other than to record your answers, to view OS/161
or System/161 code, or to examine your notes or the course web site).
Please do not access the Piazza group during the exam.
You must complete the exam on your own.
Exam Policy Statement
At the end of your exam, assuming that you have followed the rules
as stated above, please write the following statement and
then place your name after it, asserting that you have followed the
rules of the exam. "I have neither given nor received help on this exam
and have followed the rules as stated."
If this statement is absent, you will hear from me.
1. Buffer Cache Fun (60 points)
These questions are based on real world challenges in file systems.
A. O_DIRECT
As you learned in assignment 4 and our file system lectures, file
systems often maintain an in-memory collection of pages to speed access
to frequently used files (i.e., the buffer cache).
Software that wants more explicit control over when data gets to disk
(e.g., transactional database systems) frequently finds the buffer cache
a problem. Such systems often implement their own in-memory cache, and when
they issue a write system call, they really want the data to go to disk.
In recognition of this problem, some systems provide an open flag
(i.e., a flag to the open system call), O_DIRECT (or the F_NOCACHE
flag to the fcntl system call), that directs the file system not to
cache data for the file in question, but instead to service read/write
requests directly from/to the disk.
Consider the following scenario:
- Process P opens file F with the O_DIRECT flag set.
- The standard cp utility does not set O_DIRECT.
- Now, let's say that P periodically writes blocks to F.
- A shell script wakes up periodically and copies F to make a backup.
If P follows the semantics of O_DIRECT exactly and cp uses the
buffer cache as it currently exists, you can get out-of-date backups.
For example, let's say that at time 1, cp copies file F and leaves
blocks in the buffer cache.
Now P writes a block of F directly to disk (i.e., does not update the
buffer cache).
If cp runs again while F's blocks are still in the cache, it could
copy the old version of F, missing P's most recent update.
Design an implementation of O_DIRECT that solves this problem in the
context of OS/161.
You needn't write pseudocode, but identify the key data structures
you want to change and, in English, describe what parts
of the code you wish to modify and how you wish to modify them.
After you present your design, explain how it avoids the problem
described.
Note: One challenge not explicitly mentioned is that you might have
dirty buffers in the cache when P opens a file O_DIRECT. You may ignore
this situation in this question.
B. Mmap
One of the system calls we did not ask you to implement is mmap.
Mmap takes a file (or part thereof) and maps it into the address
space of a process.
Thus, it provides another way (in addition to open/close/read/write)
to access file data from a program.
- Sketch out in a paragraph or two how you might implement a limited
form of mmap that simply maps an entire file into a process's virtual
address space at whatever address the VM system wants. For this part
of the question, ignore what happens if anyone tries to extend the
file or if another process accesses the
file using open/close/read/write (that comes in the second part).
- Discuss in another paragraph or two the implications that mmap has
on the file system buffer cache and how you might address them.
2. Virtual Memory (40 points)
The LEG processor MMU has the following characteristics:
- It implements a 32-bit virtual address space.
- It has a 64 entry TLB.
- Each entry can contain a global mapping (address space IDs are
ignored) or it can contain a 6-bit address space ID.
- Each entry can map either 1 MB or 4 KB.
- Each entry specifies access of one of: no-access, read-only, read-write.
- It implements 2-level page tables in hardware.
- The page tables are only accessed on a TLB fault.
- The first/top level page table either translates a full 1 MB region or
points to a second level table.
- The second level page table translates 4 KB pages.
- Access to the page tables is as follows (you might find it useful
to draw a picture for this process).
- The address of the first/top level page table is in a register.
- The top 12 bits of the virtual address index into the top level
page table.
- If the entry is a 1 MB region, then the necessary part of the
physical address resides in the top level table.
- If the entry is not for a 1 MB region, then the physical address
of the beginning of the second level page table resides in the top level
page table.
First level page table entries are stored as shown. The "A M" bits
represent 2 bits for access modes, with the following meanings.
- 00 = Invalid mapping
- 01 = Mapping is valid, but no access is granted
- 10 = read-only
- 11 = read-write
1 MB mapping:
| 20 bits of physical address | ... | A M |

Mapping for a second level page table:
| 22 bits of physical address | ... | A M |

Second level page table entries look like:
[second level entry diagram not recoverable from the source]
- Why are 12 bits of the Virtual Address used to index into
the top level page table?
- How many entries are there in the top-level page table?
- On a TLB fault handled by the hardware page table path,
how many memory accesses are required to read data stored in a 1 MB
mapped segment?
- On a TLB fault handled by the hardware page table path,
how many memory accesses are required to access data stored in a 4 KB page
(not in a 1 MB segment)?
- Which bit(s) of a first level page table entry tell you whether the
address you are translating is part of a 1 MB region or a 4 KB page?
- How many entries are there in a 2nd level page table?
- As described, this processor can only access a 32-bit physical address
space. How could you let it access a larger physical address space?
- Is it possible to run the LEG processor on a system with
less than 4 GB (32 bits worth) of physical memory?
- Can two different pages in the same 1 MB region mapped in the
top-level page table have different access protection?
- Can two consecutive 4 KB pages mapped through second level page tables
have different access protection?
For the next 4 problems, assume that part of the top level page table
contains the following entries (the first entry shown corresponds to the
1025th one in the table).
Note that 22-bit entries are represented
using the values 0x000000 through 0x3FFFFF.
---- Beginning of Top Level Table ---- |
---- Skipping Entries 0 - 1023 ---- |
0x50000 | 0xFF | 0x3 | 10 |
0x00000 | 0xFF | 0x0 | 10 |
0x123456 | 0x3F | 0x2 | 01 |
0xDEAD0 | 0xFF | 0x2 | 10 |
0x00BEEF | 0x3F | 0x3 | 01 |
0x2B012B | 0x3F | 0x3 | 01 |
0xABCDE | 0xFF | 0x1 | 10 |
---- Rest of Top Level Table ---- |
- At what address will you find the page table entry for virtual
address 0x40277123?
- Given the information above,
do you know that 0x40277123 is an address from which you are allowed to read?
- What is the physical address for virtual address 0x40666ABC?
- Given the information above, do you know that you can read from that address?
3. Multikernel/Barrelfish (80 points)
For this question, you will need to read the first eight and a half pages of
this paper on the Multikernel architecture
and its implementation as Barrelfish.
(You are, of course, free to read the entire
paper, but I wanted to keep the portion necessary for the exam
within reason.) Please answer the following questions.
- The abstract states, "An evaluation of our prototype on multicore
systems shows that, even on present-day machines, the performance
of a multikernel is comparable with a conventional OS, and can scale
better to support future hardware." Do you believe that it is sufficient
to show that performance is comparable? Why/Why not? Limit your discussion
to 1-2 paragraphs.
- Figure 3 shows that there is a very small overhead in updating
a data item that spans
eight cache lines among many (> 8) cores using message passing,
but a significant overhead in sharing eight cache lines among
an equivalent number of cores using shared memory.
What explanation do they provide for this difference?
- In Figure 4, let's assign a numerical scale to the X axis such
that "Shared state, one big lock" = 1, "Finer-grained locking" = 2,
"Clustered objects, partitioning" = 3, and "Distributed state, replica
maintenance" = 4. What number would you give OS/161? Why?
- Although OS/161 is a shared memory and not a message-passing
architecture, there are places where a core behaves in a split-phase
manner (section 3.1). Give one example.
- Had you completed assignments 2, 3, and 4 in Barrelfish, in which
Barrelfish component(s) would the majority of the functionality reside
for each major piece of functionality (system calls, scheduling, virtual
memory, journaling)? If you specify more than one component, explain
how the work would be partitioned.
4. Distributed File Systems (18 points)
You have been hired to design a new distributed file system, TNVFCSDFS (the
new very fast, consistent, simple distributed file system).
It is up to you to decide if you want your distributed file system protocol
to be stateful or stateless. You've decided to poll your customers to figure
out what they value most: performance, consistency, or simplicity (which
is likely to get your product to market more quickly and more reliably).
Your design will depend on what you learn from your customers.
- If your customers say that performance is the most important
criterion, what will you decide? Why?
- If your customers say that consistency is the most important
criterion, what will you decide? Why?
- If your customers say that simplicity is the most important
criterion, what will you decide? Why?
5. Pot Pourri (42 points)
- The answer to the "coming to class on time" question is what?
(Do not stress; this is worth only 2 points.)
- You are the Chief Technology Officer of a startup.
Your company sells a cloud-based service that stores personal account
information including credit card data.
You have been given a budget of one million dollars for this year
to make sure that the data you store is secure.
How will you spend that million dollars? Justify your choice (if you
are going to hire someone, explain what you want that person to do).
- One of the goals of virtualization is performance isolation.
Performance isolation means that, in terms of performance, anything running
on a virtual machine cannot tell whether or not it is running on a virtual
machine.
You've been hired as the chief architect for VMsRUs. Your senior
development engineer, Chris, is confident that the best way to
achieve performance isolation is to simply partition all the resources on
the physical machine among all the VMs running on it.
Write an email to Chris explaining why you think this is a bad idea.
Your email should be somewhere around 2-4 paragraphs.