Alaska Genealogical Resources

Here’s a link for the Alaska State Library’s genealogy resources. They even have a WorldCat link so you can search for other libraries with the same materials. Some of these materials are also available at Seattle area archives where I’m a researcher available for hire.


Facebook Page for Sea Genes

Here’s the link to the Facebook Sea Genes Family History & Genealogy Page. Like it, please. Thanks!

Sanity Checking with Multiple Genealogy Programs

There are some people, like me, who are not quite satisfied with just one program for genealogical purposes. I use several, and keep an eye out on the others for features that might suit me well.



LifeLines (Photo credit: Wikipedia)

LifeLines is one of the programs I use on a semi-regular basis. It is an old program (console window, anyone?) with a long history of strong development by its maintainers. Thomas Wetmore originally wrote it back in the olden days of Unix and DOS, but it’s still around.

One of the things I like about Lifelines is its powerful scripting language. The language takes a bit of getting used to, but once you know it, it feels intuitive. The program comes bundled with a lot of scripts, some better than others, and some near-duplicates of others. The verify script is one of the most powerful sanity-checkers on the market (did I mention that Lifelines is free?). Among the things it reports on are age boundaries (birth, marriage, and death), multiple marriages, children out of order, and so on. Several scripts check for people who might be in the Social Security Death Master Index. Another script, called (weirdly enough) zombies, looks for people who have no death items (death, probate, burial, and so on).

I ran verify recently on a 5500+ person database and it came up with nearly 1600 items that it thought were interesting, all falling outside the user-programmable boundaries. The report is not for the faint of heart, as it can be a lot to digest. The nice thing about it is that as I go through it, item by item, I tighten up the quality of the data and move the entire database toward a consistent standard. It might take years to work through the entire list and complete each item, but knowing about these items is the important thing.
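Lifelines’ verify is written in its own report language, but the flavor of boundary check it performs can be sketched in ordinary Python. Everything below is illustrative: the record layout, the limit values, and the names are my own invention, not Lifelines’ actual defaults.

```python
# Sketch of a verify-style boundary check over simple person records.
# The record layout and limits here are illustrative, not Lifelines' own.

MAX_AGE = 105          # flag anyone older than this at death
MIN_MARRIAGE_AGE = 14  # flag anyone married younger than this

def verify(people):
    """Yield (name, problem) pairs for records outside the boundaries."""
    for p in people:
        birth, death, married = p.get("birth"), p.get("death"), p.get("married")
        if birth and death and death - birth > MAX_AGE:
            yield p["name"], f"age at death {death - birth} exceeds {MAX_AGE}"
        if birth and married and married - birth < MIN_MARRIAGE_AGE:
            yield p["name"], f"married at age {married - birth}"

people = [
    {"name": "John Mellen", "birth": 1650, "death": 1760},  # 110 at death
    {"name": "Mary Mellen", "birth": 1652, "death": 1730, "married": 1670},
]
for name, problem in verify(people):
    print(name, "-", problem)
```

The real script checks far more (multiple marriages, children out of order, and so on), but the pattern is the same: compare each record against user-settable boundaries and report the outliers.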

Like verify, the zombies script reads through the database, but it plucks out those who have no death items. This report is much simpler, and sortable, so you can find people by year instead of in database order. The great thing about it is that you find out who in the database is not marked as dead, dead, dead, as in dead. The script doesn’t consider the deceased flag, if there is one on the person; it makes you think about getting the details, and you’ll want to go out and get them right away.
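The zombies idea is simple enough to sketch in a few lines of Python. The tag set and records here are GEDCOM-flavored placeholders of my own, not the script’s actual logic.

```python
# Sketch of a zombies-style check: list everyone with no death-related item.
# DEATH_TAGS is an illustrative set of GEDCOM-like tags, not Lifelines' list.

DEATH_TAGS = {"DEAT", "BURI", "PROB", "CREM"}

def zombies(people):
    """Return (birth_year, name) pairs for people lacking any death item,
    sorted by year so the oldest candidates come first."""
    found = [(p["birth"], p["name"]) for p in people
             if not DEATH_TAGS & set(p.get("tags", []))]
    return sorted(found)

people = [
    {"name": "Abel Rice", "birth": 1701, "tags": ["BIRT", "DEAT"]},
    {"name": "Seth Rice", "birth": 1640, "tags": ["BIRT"]},  # no death item
]
print(zombies(people))  # → [(1640, 'Seth Rice')]
```

Sorting by birth year is what makes the report actionable: someone born in 1640 with no death item is a research gap, not a living person.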

If you’ve added a lot of what I call “the moderns,” you’ll want to run one of the SSDI check scripts and follow up with a visit to the Death Master Index on your favorite online site that has one. I used to use the one at Rootsweb, but it was moved to its own site for some reason. Shucks, the Rootsweb version was better, IMHO.

Enough about the great Lifelines scripts. Multiple programs for genealogical data analysis are a must if you are serious about the pastime. Knowing what’s good data and what’s bad is both a good idea and ethically correct. My other genealogy programs include an old version of Legacy and a current version of The Master Genealogist.

TMG is the one I use on a regular basis, as it is almost as powerful as Lifelines in its analysis and reporting facets. The only drawback to TMG’s reporting is that it’s not as flexible and programmable as Lifelines’. Legacy, on the other hand, even though my copy is quite dated, is pretty good at picking out bad data, too. Even though I haven’t used Legacy for a while, I keep it around, like Lifelines, as a variant-finding tool.

T4G: Punctuation and Text Formatting

Hyphens are punctuation, a part of the text; en and em dashes are not, they are formatting marks. I’ll talk a little about the differences between them and the genealogical applications of each. A brief resources section highlighting significant sources used in this article is also given.

The hyphen, en, and em dashes discussed here are part of the standard font package. The hyphen is in the Basic Latin section and the other two are found in the General Punctuation part of the font’s special characters listings.
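You can confirm where these characters live with a few lines of Python’s standard unicodedata module. (The hyphen-minus at U+002D sits in the Basic Latin block; the dashes sit in General Punctuation.)

```python
import unicodedata

# Print the code point and official Unicode name of each character:
# the hyphen-minus, the en dash, the em dash, and the horizontal bar.
for ch in "-\u2013\u2014\u2015":
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
```

Running this prints HYPHEN-MINUS (U+002D), EN DASH (U+2013), EM DASH (U+2014), and HORIZONTAL BAR (U+2015), which is a quick way to check what your word processor actually inserted.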

Hyphen


Hyphens are punctuation, a part of the text. In the old days of the typewriter and early days of the computer, hyphens were doubled and tripled to substitute for dashes. This is unnecessary now as we have proper dashes available. The hyphen is also distinct from a minus sign, but mathematical expressions occur only rarely in our type of writing.

En Dash

En dashes are what Bringhurst (see the resources section) calls analphabetic characters. His thoughts on handling them differ from traditional usage, in part because he takes into account more languages than English, the language most fonts are designed for.

In genealogical writing, the en dash is the strongest visual indicator for date ranges. En dashes separate the two ends of a range, such as 1582–1752. Some textual terms can also benefit from its use: an en dash emphasizes the separation between a prefix and what follows in a compound term such as post–1945 (though many style guides reserve the en dash for prefixes attached to open compounds, and would set pre-marriage with an ordinary hyphen).

Em Dash

Em dashes separate thoughts. They represent missing data in some cases as in unknown surnames (—?—).

In terms of formatting, there are several micro-stylistic thoughts to consider. One is how much spacing there should be around the em dash.

Bringhurst would have us use a spaced en dash as an alternative to the (subjectively) lengthy em dash, as in “… – …”. Doing this means putting a non-breaking space before the en dash to keep it attached to the preceding word, which can affect a text’s justification.

One of the faults of Times New Roman is that its em dash is too long. Many professionally designed fonts compensate by drawing the em dash at a more realistic width. Times New Roman was designed for a specific purpose, newspapers, and is best left to that type of publication. Linux Libertine, on the other hand, was designed for more common publications, such as this one, and for books, so its readability is greater.

Illustration: Linux Libertine and Times New Roman em dashes

Hatcher, and Leclerc and Hoff (see the resources section for both), differ on whether there should be spaces around an em dash in running text. I prefer the latter approach and include the spaces. Doing so also requires paying attention to justification and line breaks, so the dash doesn’t sit by itself at the beginning of a line.

My own thought on doubling or tripling the em dash for missing names is that it’s unnecessary. A single character, called a “horizontal bar” in Unicode terminology (―), can stand in. It is shorter, and still representative of the strong emphasis needed. I prefer to denote missing data with just an em dash or as (—?—) [opening parenthesis, em dash, question mark, em dash, closing parenthesis].

Dumb and Curly Quotes, Redux

Using real quotes (curly “ / ”) raises the tone of what we read. It’s also what we were brought up to see in printed, published materials. Online is another matter, though, since most early computer systems couldn’t handle curly quotes and kept the dumb quote from the teletype repertoire.


The Chicago Manual of Style, 15th ed., Chicago, Illinois: University of Chicago Press, 2003.

Robert Bringhurst, The Elements of Typographic Style, version 3.2, Point Roberts, Washington: Hartley & Marks, Publishers, 2008. See in particular chapter 5, “Analphabetic Characters,” on punctuation and textual markup.

Patricia Law Hatcher, Producing a Quality Family History, Salt Lake City, Utah: Ancestry, Inc., 1996.

John D. Lamb, Notes on OpenOffice Writer: Large and Complex Documents, n. p.: n. pub., 2009. Available online at the author’s home page. See in particular Chapter 2 “Characters, Fonts and Highlighting,” on the details of the characters and their handling.

Michael J. Leclerc and Henry B. Hoff, eds., Genealogical Writing in the 21st Century, Boston, Massachusetts: NEHGS, 2006.

Peter Wilson, A Few Notes on Book Design, Normandy Park, Washington: The Herries Press, 2009. Available online at the LaTeX archives. See in particular chapter 5, “Picky Points,” on punctuation and textual markup.

© N. P. Maling — Sea Genes – Family History & Genealogy Research

Prosopography — Inductive and Deductive Uses

I’ve been thinking of starting a prosopography project. The reason I want to do it is to find out who the Mellen folks William Barry said “lived on the fields …”. The problem is that this is a deductive project while a prosopography is an inductive project.

What’s the difference? Inductive research develops facts or information from the specific to the general. Deductive research generates facts from the general to the specific. Barry’s register of Framingham, Massachusetts residents is a deductive, or inclusive, listing, working from the general (a place) to the specific (who lived in that place). If I were to go the opposite direction, I’d end up with how many people, of what ages, and so on, lived in Framingham.

If I were to build a database of all the folks listed in Barry’s genealogical register, and add a different data set, like the vital records set (the tan book), I’d have a base set of data for generating inductive statistics. These two sets would act like a census enumeration. From them, I’d be able to separate names, birth, marriage, and death dates, what they did, and so on. I’d then have a basis for guessing, or actually determining, who he didn’t include in his register. This is inductive: from the specific to the general. The difference is that instead of numbers, I’d have some possible names of people to research.
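At bottom, the “who did Barry leave out?” question is a set difference between the two data sets. A toy sketch, with invented names standing in for real Framingham residents:

```python
# Toy sketch: people appearing in the vital records ("the tan book")
# but absent from Barry's register. All names are invented placeholders.

register = {"John Mellen", "Simon Mellen", "Thomas Eames"}
vital_records = {"John Mellen", "Simon Mellen", "Thomas Eames", "Jane Stone"}

# Set difference: in the vital records, not in the register.
missing_from_register = sorted(vital_records - register)
print(missing_from_register)  # → ['Jane Stone']
```

The real project would key on more than a name string (dates, parents, spellings), but the inductive move is the same: combine specific records into a general pool, then query the pool for the gaps.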

Going further, by adding land records, wills, and other such records to the mix, I’d be able to determine more specifically who lived where, based on proximity (the land boundaries) and relationships (the wills). These two sources are more specific (and primary) than the derivative genealogical register and the vital records (the latter is a compiled [secondary] source, though an “official” one).

These four sources, a register, the vital records, the wills, and the land records, are a good start for building a universe of people at a given place during a given time. The data are specific, measurable, attainable, realistic, and time-limited (SMART). They are very specific about who, what, where, when, why, and how (the five Ws and H). From this data one could start querying in a general sort of way to find out who were the elite and who were the lower classes for sociological purposes; who were well-off and who weren’t for economic purposes; who lived longer and who died young for medical purposes; and so on. These are the classic goals of such a prosopographical study.

A further idea for a data study of Framingham residents would be to glean at least part of the data from Robert Charles Anderson’s Great Migration (GM) project books. This is a massive data set which was done as a sort of prosopographical study. Whether I’d be able to query the data the hard way from the books is one question I’ve not answered yet, though. The GM study covers immigrants over an earlier period than Framingham’s existence, for one thing. The descendants of those listed in the study, however, may be noted as living in Framingham, which is where the GM study’s most valuable contribution lies. It is worth looking into already-completed projects, such as this one, before embarking on your own time-consuming database project.

Social networking research among these data is also possible. From such a genealogical database, you’d be able to find connections to an ancestor’s neighbors, business associates, and extended family members. Using a program such as The Master Genealogist (TMG), you’d be able to tag all of the data in such a way as to find, from TMG’s witness and associate view screens, who knew whom.

It makes sense to build a prosopographical database in TMG first, rather than in a product such as Microsoft Access or another general-purpose database. The increased value of the data from the get-go expands the possibilities of its use from a single researcher to a global audience. TMG offers comma-separated value (or tab-separated value) export files, so a subset, or the entire database, can be exported for use in a general-purpose database product or spreadsheet, which is what works best for statistical analysis of inductive data.
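As a sketch of that workflow, here is a small CSV export, with hypothetical column names (TMG’s actual export fields will differ), loaded and summarized with Python’s standard csv and statistics modules:

```python
import csv
import io
import statistics

# Hypothetical two-person export; TMG's real column names will differ.
export = io.StringIO(
    "name,birth_year,death_year\n"
    "John Mellen,1650,1720\n"
    "Mary Mellen,1652,1730\n"
)

rows = list(csv.DictReader(export))
lifespans = [int(r["death_year"]) - int(r["birth_year"]) for r in rows]

# The sort of inductive statistic a spreadsheet or database would compute.
print("average lifespan:", statistics.mean(lifespans))
```

In practice you would point `csv.DictReader` at the exported file itself; the point is only that once the data leaves TMG as CSV, any general-purpose tool can take over the statistics.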

TMG’s data analysis, however, works best for genealogical purposes. You can make groups of persons for further research and/or tag them as a particular demographic, for instance. The to-do feature lets you gather in one place any and all data for future research into a family or group. Try that in a general-purpose database and you’d be using a totally different program just to hold the to-do data. By keeping the database and its metadata (sources, to-dos, etc.) intact, your research is better off, as there is no chance of losing the metadata by separating it from its source.

While prosopography is not genealogy, genealogy can be seen as a subset of prosopography, and the data are much the same. The primary difference is that prosopography does not make the person-to-person connections genealogy does, which reduces its usefulness to a genealogist. The data from a genealogy project, on the other hand, are extremely useful to a prosopographer.

To get back to the Framingham project, the goal, “find out who lived in the fields below …”, becomes easier since the relevant data is in one source data set or another. Once these people’s information has been extracted from the overall data set, they become easier to research, because the significant starting data (their names and their birth, marriage, and death dates) is now available. From all the data a new genealogical project emerges. The possibility then arises that a new, expanded, and perhaps more accurate local history of Framingham could emerge as well.


Find out more about prosopography at the Prosopography Portal.

Find William Barry’s history of Framingham at the Internet Archive or at Google Books.

Find the Framingham vital records at the Internet Archive or at Google Books.

Find out more about the Great Migration project at its website.

Find out more about The Master Genealogist at Wholly Genes’ website.