A Graduate School Open House: Words from a Student

So you got invited to an open house after applying for your PhD. Now you get to visit the university, meet the faculty and students, and get a feel for the department overall. Here are some pointers I have picked up over the years: things to ask if you freeze up and can't think, and things to look for when visiting. I'll try to discuss things relevant to both PhD and Master's students, but I'm currently a PhD student, so these tips may be more applicable to a PhD.

What is grad school?

I've been around for some time, and let me tell you one thing: you are more prepared than I was when I was looking at graduate schools and programs. When I applied, I essentially Googled “What can you do with a biomathematics degree?”; biostatistics came up, and off I went applying. Not a very rigorous or studious way of doing things, but I think it worked out. So what is grad school? One thing I can say definitively, at least for me: it is not undergrad. My undergraduate school is not a research institution, and I hadn't done any heavy programming and didn't even know what Markdown was (nor LaTeX or R). I also didn't realize it wasn't common to wear shorts, a t-shirt, and a backwards hat to class. So if you know this, you are more prepared than I was.

School Determination

It's a little late for this, but the first thing you need to do is pick a school that has the program you want. For a Master's degree, that means the program you want to study at an affordable price. I didn't thoroughly weigh the cost of my education across programs, which I regret. I believe I still made the best choice, but a more informed financial decision would have been better.

Questions

Here I'll go through some questions you can ask if you don't feel like you know what to ask when you get there. I'll break them up into questions to ask overall, questions for other students, and questions for professors, as these three audiences differ in what you should ask and how you should ask it.

Questions to ask overall

These are questions you can, and should, ask in a big group so that other students can hear the responses as well. They may be asked to administrators or the graduate program director, as well as other students.

  1. What are some examples of departmental events?
  2. What resources does the department have for new students?
  3. What resources does the department have? e.g. a computing cluster, money for students' books or a new computer
  4. Do students get offices, and if so, which students?
  5. How big is the department?
  6. What are the requirements for graduation (comps, orals, dissertation, etc)?

Questions to ask students

These are usually much more informal than those to the professors or staff. The level of informality can tell you whether the student body fits well with your personality.

  1. What is student life like?
  2. Does the stipend cover the cost of living?
  3. What neighborhoods do you live in?
  4. How much does rent usually cost?
  5. How much does a cheeseburger and/or a beer cost at a restaurant you usually go to? This gives you a concrete sense of the cost of living.
  6. Do you have/need a car?
  7. What is the public transportation like (do you ride the bus, do you usually cab, etc)?
  8. How much interaction do you have with professors who aren't your advisor/teacher? When I was interviewing at another department, I told one of their students that I was interviewed by Professor X (not Xavier), and they replied “Oh, I don't know who that is.” That told me a lot about how the department operated, and I realized I liked smaller departments.
  9. What student groups exist? This is interesting because it'll let you know how the students interact. For example, we have a computing club, a journal club, and an informal blog meeting.

Questions to ask professors

These are more formal in some respects and allow you to find out what research people are really doing and see if you connect with anyone.

  1. What is the coolest thing you've done recently?
  2. How many students do you have under you, on average?
  3. How often do you meet with your students?
  4. How many classes do you teach a year?
  5. How many students of yours have graduated? This is biased against new professors, but they should let you know that they are new.
  6. How are your students funded? Do they find funding or do you have funding usually available?

Be yourself

The main message I can send is: be yourself. However cliché it is, being yourself will let you accurately gauge whether you get along with the department and its students. Some personalities just don't mesh with certain departments or certain professors, but that's not unique to grad school. By the end of the visit, it's good to have gathered as much information as you can about the “feel” of the department and whether you fit there.

Like where you live

Lastly, I think the best advice about choosing a department for a PhD came from a friend of mine: “live somewhere you would like to live for the next 5 years”. I'd say the same thing for a 2-year or 18-month Master's degree. The department can be great, but liking where you live is a huge component of maximizing happiness while you're in graduate school. And that should be #2 on your list, right after “getting it done”.

Changing the Peer Review Process: Thinking like a 10-Year Old

Abstract

I discuss my idea for a different type of peer review process, in which you must review another paper before your own gets reviewed.

A story about when I was a 10-Year-Old

I couldn't really sleep this morning, so naturally I started thinking about the peer review process. I started thinking about elementary school and when teachers would make us grade each other's homework. When I had to grade my neighbors' assignments and they had to grade mine, I usually thought 3 things: 1) the teacher was too lazy to grade our homework (how wrong I was!), 2) I was happy to get feedback so quickly, and 3) the grading was generally less harsh than if the teacher had done it.

Back to Peer Review

I was thinking about how this idea could apply to a peer-reviewed journal. I think it would be interesting if, whenever you submitted a paper, the only way to get feedback were to review another paper from that journal. This would be interesting for a few reasons.

  1. It would encourage fast feedback from reviewers. (I'll get to how this may be a bad thing).
  2. You'd have a large pool of reviewers.
  3. Although many (if not most) academics reviewing papers are paying forward the reviews their own papers received, the payback here would be immediate. Also, you wouldn't have any “leechers” publishing in that journal.
  4. It would promote the idea that one aspect of research is “giving back”.

How to Select Which Papers to Review

You could select from a list of papers that are currently available to be reviewed and pick the one (or more) that you feel qualified to review. You might have to give a reason why you feel qualified (or not). If you didn't feel qualified for any, you could either pay a fee or list the fields you do feel qualified in and wait for the journal to respond. Editors would still check that the paper is in line with the journal's mission and could step in if they don't see a good fit between paper and reviewer.

Caching

I like the idea that you could “cache” reviews so that if you (or a co-author) submitted to this journal (or a network of journals) and had more reviews than submissions, you could simply submit. This would be useful because:

  1. Younger academics (i.e. students) would be encouraged to get into the peer review process earlier.
  2. You may review at any time if you think you have a submission in the future.
  3. You know you still “gained” something from the review above and beyond what you gain from being a reviewer in the current system (new knowledge, seeing other writing styles, etc.)
  4. A co-author who reviews a lot of papers may be more desirable to collaborate with (not a great reason, but it still gives an incentive to review).

Drawbacks

Potential drawbacks for this system obviously exist, such as:
  1. Quality of review. You'd run the risk of people reviewing too quickly or reviewing papers they are not qualified for.
  2. Authors could draw on co-authors' “caches” more often than giving back, so that only a few people still do the reviews.
  3. Competing papers might get rival authors reviewing them (could be rare, but worth considering).
  4. Allowing people to select which papers to review may leave some more complicated papers neglected. (You could maybe assign these.)

Conclusions

The idea isn't fully hashed out, and there are probably unforeseen problems, but I think this system would be interesting to try, if it's not being used already. (Incidentally, if you know of a place where this system is used, links/comments are welcome!) The peer review process would become even more of a “scratch my back, I'll scratch yours” situation. Also, it would give direct incentives for reviewing, which I believe is a good thing.

Changing Terminal Tab Names

So I was looking into how to change Terminal tab names. I want the tab name to change to the current working directory if I'm on my local system, to “Enigma” if I'm on our cluster's host computer, and to “Node” if I'm on a cluster node.

After some tweaking, I found a solution that I like.

In my ~/.bashrc file, I have:

function tabname {
  x="$1"
  export PROMPT_COMMAND='echo -ne "\033]0;${x}\007"'
  # printf "\e]1;$1\a"
}
### changing tab names
tname=`hostname | awk '/enigma/ { print "Enigma"; next; } { print "Node" }'`
tabname "$tname"

which essentially just matches the word enigma in the output of the hostname command. The result is assigned to the bash variable tname, and tabname then sets that as the tab name.

In my personal ~/.bashrc, I added:

function tabname {
  x="$1"
  export PROMPT_COMMAND='echo -ne "\033]0;${x}\007"'
  # printf "\e]1;$1\a"
}
### changing tab names
tname=`hostname | awk -v PWD="$PWD" '/macbook/ { print PWD; next; }'`
tabname "$tname"

so that when I'm on my macbook (change the pattern as needed for your machine), the tab name will be the working directory. Now, yes, I know that Terminal usually puts the working directory in the window name, but I find that I tend to look only at tab names, not window names.

Now, you can combine these to have:

tname=`hostname | awk -v PWD="$PWD" '/enigma/ { print "Enigma: " PWD; next; } { print "Node: " PWD }'`

if you want to describe where you are on the cluster.

Here's the result:

(screenshot of the renamed Terminal tabs)

This worked great on our cluster, but the tab name remained after I exited an ssh session, so I'm still tweaking. Any comments would be appreciated.
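Since the name sticks around after an ssh session, one possible fix (a sketch, not tested on every setup, and not part of my original configuration) is to wrap ssh in a shell function that resets the tab name when the remote session ends. The simplified tabname here prints the escape sequence directly rather than going through PROMPT_COMMAND:

```shell
# Simplified tabname: print the title-setting escape sequence directly.
tabname() {
  printf '\033]0;%s\007' "$1"
}

# Hypothetical wrapper: run the real ssh, then restore the local tab name.
ssh() {
  command ssh "$@"
  tabname "${PWD##*/}"   # reset the tab name to the local working directory
}
```

Because the function is named ssh, it shadows the real command in interactive shells; `command ssh` inside it calls the actual binary.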

Faster XML conversion to Data Frames

Problem Setup

I had noted in a previous post that I have been using the XML package in R to process an XML export of our database. I used xmlToDataFrame to convert an XML node set to an R data.frame, and I found it remarkably slow. After some Googling, I found a link where the author states that xmlToDataFrame is a generic function, and that if you know the structure of the data, you can leverage that to speed up the conversion.

So, that's what I did for my data. I think this structure is applicable to similar data structures in XML, so I thought I'd share.

Data Structure

Let's look at the data structure. For my data, an example XML would be:

<?xml version="1.0" encoding="UTF-8"?>
<export date="13-Jan-2014 14:08 -0600" createdBy="John Muschelli" role="Data Manager">

  <dataset1>
    <ID>001</ID>
    <age>50</age>
    <field3>blah</field3>
    <field4 />
  </dataset1>
  <dataset2>
    <ID>001</ID>
    <visit>1</visit>
    <scale1>20</scale1>
    <scale2 />
    <scale3>20</scale3>
  </dataset2>
  <dataset1>
    <ID>002</ID>
    <age>40</age>
    <field4 />
  </dataset1>  
</export>  

which tells me a few things:

  1. This is XML (first line). There are other pieces of information that can be extracted from the tags, but we won't cover that here.
  2. Everything is part of a large field called export (a parent node, in XML terms) (second line).
  3. Datasets are children of export (they're nested in export). For example, we have dataset1 and dataset2 in this export.
  4. There are missing data points, represented either by <tag></tag> or <tag />. Both are valid XML.
  5. Not all records of a dataset have all fields. The second record of dataset1 doesn't have field3 but does have field4.

So I wrote this function to make my data.frames, which I found to be much faster for conversion for large datasets.

require(XML)
xmlToDF = function(doc, xpath, isXML = TRUE, usewhich = TRUE, verbose = TRUE) {

    if (!isXML) 
        doc = xmlParse(doc)
    #### get the records for that form
    nodeset <- getNodeSet(doc, xpath)

    ## get the field names
    var.names <- lapply(nodeset, names)

    ## get the total fields that are in any record
    fields = unique(unlist(var.names))

    ## extract the values from all fields
    dl = lapply(fields, function(x) {
        if (verbose) 
            print(paste0("  ", x))
        xpathSApply(doc, paste0(xpath, "/", x), xmlValue)
    })

    ## make logical matrix whether each record had that field
    name.mat = t(sapply(var.names, function(x) fields %in% x))
    df = data.frame(matrix(NA, nrow = nrow(name.mat), ncol = ncol(name.mat)))
    names(df) = fields

    ## fill in that data.frame
    for (icol in 1:ncol(name.mat)) {
        rep.rows = name.mat[, icol]
        if (usewhich) 
            rep.rows = which(rep.rows)
        df[rep.rows, icol] = dl[[icol]]
    }

    return(df)
}

Function Options

So how do I use this?

  • You need the XML package.
  • doc is a parsed XML document. For example, run:
doc = xmlParse("xmlFile.xml")
  • xpath is an XPath expression extracting the dataset you want. For example if I wanted dataset1, I'd run:
doc = xmlParse("xmlFile.xml")
xmlToDF(doc, xpath = "/export/dataset1")
  • You can set isXML=FALSE and pass in a character string of the xml filename, which just parses it for you.
xmlToDF("xmlFile.xml", xpath = "/export/dataset1", isXML = FALSE)
  • usewhich just flags whether to use which for subsetting. It seems faster, and I'm trying to think of reasons logical subsetting would be faster. This doesn't really change functionality, since which returns something of length ≥ 1 by construction (every field appears in at least one record), but it may speed up the code for large datasets.
  • verbose – do you want things printed to screen?
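If you want to check whether usewhich matters on your machine, a rough timing sketch comparing the two subsetting styles might look like the following (the vector size and loop count are arbitrary, chosen only for illustration):

```r
n <- 1e6
keep <- runif(n) > 0.5
df <- data.frame(x = numeric(n))
# assign via the logical vector directly
system.time(for (i in 1:20) df[keep, "x"] <- 1)
# assign via integer indices from which()
system.time(for (i in 1:20) df[which(keep), "x"] <- 1)
```

Timings will vary by machine and R version, so it's worth running both on data of the size you actually have.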

Function Explanation

So what is this code doing?

  1. Parses the document (if isXML = FALSE).
  2. Extracts the nodes for that specific dataset.
  3. Gets the variable names for each record (var.names).
  4. Takes the union of all those variable names (fields). This will be the set of variable names for the resultant dataset. If every record had all fields, this would be redundant, but it is a safer way of getting the column/variable names.
  5. Extracts all the values from each field for each record (dl, which is a list).
  6. Makes a logical matrix recording, for each record, whether that field was represented in the XML.
  7. Loops over each field, filling in the values in the data.frame.
  8. Returns the data.frame.
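As a small end-to-end sketch tying these steps together, using the xmlToDF function defined above (and assuming the sample XML from earlier is saved as sample.xml; that filename is just for illustration):

```r
library(XML)
doc <- xmlParse("sample.xml")
d1 <- xmlToDF(doc, xpath = "/export/dataset1", verbose = FALSE)
# one row per <dataset1> record; columns are the union of fields
# (ID, age, field3, field4); the second record has no <field3>,
# so that cell is left as NA
```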

Timing differences

Obviously, I wanted to use this because I think it'd be faster than xmlToDataFrame. First off, what was the size of the dataset I was converting?

dim(df$df.list[[1]])
# [1] 16824 161

So only 16824 rows and 161 columns. Let's see how long it took to convert using xmlToDataFrame:

    user   system  elapsed 
4194.900   93.590 4288.996 

where each measurement is in seconds, so that's over 1 hour! I think this is pretty long; I don't know all the checks going on under the hood, so it may not seem unreasonable to those who have used this package a lot, but I think it's unscalable for large datasets.

What about xmlToDF?

   user  system elapsed 
225.004   0.356 225.391 

which takes about 4 minutes. This is significantly faster, and makes it reasonable to parse the 150 or so datasets I have.

Conclusion

This function (xmlToDF) may be useful if you're converting XML with a structure similar to the one above into a data.frame. If your data are different, you may have to tweak it to fit your needs. I understand that the for loop is probably not the most efficient approach, but it was clearer to those I'm writing for (other collaborators), and the efficiency gains over xmlToDataFrame were enough for our needs.

The code is hosted here. Also, you can use this function (and any updates that are made) through the processVISION package:

require(devtools)
install_github("processVISION", "muschellij2")