While this week’s readings were fairly diverse, I felt the major theme stretching across them was the important relationship between design and content. According to the results of the Stanford Credibility project, a webpage’s design and appearance are, in many cases, more important than its actual content in determining its perceived credibility to the average user. Nor is this phenomenon limited to the web, as Hagen and Golombisky make the same point, if less explicitly, in their discussion of design considerations for print media.

Clearly then, any website that hopes to achieve credibility, not to mention actually attracting and keeping visitors, must consider its design and aesthetics as an element almost as important as its content. Additionally, as pointed out by Hagen and Golombisky, this design should complement the content rather than just provide a pretty interface for the cutting and pasting of content. Design and content must be developed side by side, not independently and joined together only in the final product.

Perhaps the most interesting element of the intersection of design and content is provided by Donald Norman in his article on why “Attractive Things Work Better.” Norman argues for the use of design to engage the overlapping of the three levels of brain processing: visceral, behavioral, and reflective. Essentially, by using design to engage a visceral reaction we can improve the reflective processing of the content. For websites, the implication is that a design that provokes a positive affective response will encourage the curiosity, creativity, and learning ability of viewers (resulting in prolonged visits and increased understanding of content). An important consideration for the design of websites (and other media), this also perhaps partially explains the results of the Stanford Credibility Project.

But while website designers will primarily want to focus on achieving positive affect, in certain circumstances the use of negative affect is also useful. Negative affect, according to Norman, produces high focus and concentration (often to the point of tunnel vision), which may be more desirable to designers for some tasks than the creativity engendered by positive affect. Norman’s example is of the monitoring station at a nuclear power plant, which uses positive affect (an attractive, pleasant environment) during normal operations but imposes negative affect (alarms) to achieve increased focus. This also has important implications for designs that are intended to be used in situations with externally introduced high stress. In these cases, rather than designing to introduce affect, designers need to focus on accounting for the visceral effect of outside factors. The designer of a Heads-Up Display for a fighter aircraft, for example, should concentrate on making the display as simple and intuitive as possible, so that even during the tunnel vision resulting from high-stress combat, pilots will be able to quickly and effectively use the display.

A second major theme engaged by this week’s readings is the permanence or impermanence of the web. Despite the efforts of projects like the Internet Archive and its Wayback Machine, millions of websites have been written over or deleted. More importantly, even those that have been saved can be difficult to find due to the problems with navigating web archives. While not as big a concern for our projects this semester as the intersection of design and content, this obviously holds huge implications for the future of our profession as historians. While historians increasingly use the web for research and citations, and digital historians especially argue for the increased transparency of using links in footnotes, the credibility of this methodology is called into question when these links are dead or, more dangerously, overwritten with new material. Thankfully, perma.cc provides at least a partial solution to this problem by saving a permanent link to the website as cited. Yet this solution suffers from the fact that its viability depends on the continued existence of a third-party institution; it also will obviously only work for links in digital scholarship, and does nothing to protect websites listed in the footnotes of print works.


Comment on Jordan’s post #1: Credibility of Design


For my final project I worked on mapping Army posts in post-Civil War America. This project stemmed from my wider research interests, and I hope to use the insights gained in this project to further my dissertation research moving forward. For my dissertation I am looking at the professionalization of the Army in the last half of the nineteenth century. One of the factors often cited as part of this professionalization is the consolidation of units onto a smaller number of larger bases, allowing for both economies of scale in fatigue and guard details and the ability to train soldiers to operate as larger units. With this in mind, I wanted to create a map to help visualize the shift both in number of posts and the greater troop strength at each.

Like all projects, this one first had to start with sources. For this project to be effective I had to have a fairly complete list of Army posts, along with their assigned strengths and locations. Additionally, I would need this same information over the course of several decades in order to show change over time. While I did not have an existing source base at the outset of the project, I had a pretty good idea of where I could find the information from previous research. For an earlier research paper I worked with the “Annual Report of the Chief of Ordnance”, and I knew that this report, along with those from all of the other departments, was compiled annually and submitted as appendices to the “Annual Report of the Secretary of War to Congress”. A quick Google search found the one year (1877) that had been digitized by Google Books. Contained within the appendices was the report of the Adjutant General, which included a table listing all Army posts by name, location, commanding officer, units assigned, and strength (broken down in more detail than I needed for the parameters of this project). Unfortunately, only that one year of the Report has been digitized, so I had to make a trip to the Army Heritage Center at Carlisle Barracks to get the data for the other years. However, since I had already been able to examine the 1877 report, I showed up knowing exactly what I was looking for and exactly where to find it, so it took me only a half day to pull the data from 30 years of Annual Reports.



Once I had the data, the next step was putting it in a format that would be compatible with the mapping program. My initial plan was to build the map in Google Map Engine, as it has by far the easiest user interface. I intended to create a layer for each year and size the point markers by the number of troops at each post. However, there is no good way to make the size of the marker correspond to troop strength, and I would have had to create each point manually, one at a time. For these reasons, I decided to go with Palladio instead. Palladio is a little less user-friendly, and is still in development and thus suffers from the occasional bug or idiosyncrasy. However, it does allow you to automatically size your points by either frequency or an independent value (troop strength in my case), and you can upload all of your points from a spreadsheet rather than creating them one by one.

While I thus didn’t have to individually create each point, in order for them to populate Palladio’s map I needed my data to include coordinates for each post. The table in the Annual Report did list each fort’s location, but only by textual description. Some of these were fairly straightforward, such as “6 miles from Omaha,” while others were more obscure, such as “at the fork of the Concho river,” or relatively vague, such as “in the Owen River Valley.” Obviously this was not going to work for Palladio, so I had to do some additional research first. While time-consuming, this was relatively easy. Many of these forts have become modern towns, and thus show up in a quick Google Maps search (right-clicking on the map and choosing “what’s here” will give you the latitude/longitude of any point on Google Maps). I was also pleasantly surprised at the coverage of these forts on Wikipedia; almost all of them, even those that were only active for a few years and never held more than 100 people, have their own Wikipedia entries. Almost all of those entries contain coordinates, and most of those that didn’t at least referenced a location tied to a modern town, allowing me to find them on Google Maps.

Now armed with coordinates, I built a spreadsheet combining all the data needed for upload to Palladio, specifically Place Name (the title of each post), Coordinates, Strength, and Date (really year, but Palladio requires yyyy-mm-dd format, so I used the date of the Annual Report). I made a tab for each year, plus a consolidated tab with all years for upload. For the years I chose a representative sampling. While at the archives I pulled the data for all years between 1866 (the end of the Civil War) and 1896. However, I quickly decided to start my mapping project in 1876, which marked both the end of Reconstruction (with troops posted throughout the occupied South) and the height of the Indian Wars (the Battle of Little Big Horn, where Custer’s command was wiped out, occurred on June 25/26, 1876). As I began data entry I also realized how time-consuming including every year even during this reduced period would be, and so reduced my table to a sampling of years. I included every five years (1876, 1880, 1885, 1890, 1895) for periodic references and then chose several sets of paired years tied to major events: 1876/1877 show the end of Reconstruction, 1880/1881 mark the initial Congressional approval of the Army’s proposal to reduce the number of bases, and 1892/1893 mark what is generally considered the end of the Indian Wars (although clearly no one knew this at the time).
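A few illustrative rows give a sense of the consolidated upload tab’s layout (the post names are real, but the strengths, dates, and coordinates shown here are placeholders rather than figures from the Annual Reports):

```csv
Place Name,Coordinates,Strength,Date
Fort Omaha,"41.306,-95.959",312,1876-10-20
Fort Concho,"31.454,-100.465",250,1876-10-20
Fort Omaha,"41.306,-95.959",540,1896-10-01
```

Palladio reads the paired latitude/longitude out of a single Coordinates cell, which is why each pair is kept together in one quoted field rather than split across two columns.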

It was at this point in the project that I ran into my first problem. Stupidly, I did not look closely at Palladio before beginning to build my spreadsheet. My coordinates were in degrees, minutes, seconds format (the format Wikipedia uses), while Palladio requires coordinates in decimal format. Rather than go back through and manually reenter each coordinate (Google Maps gives you both formats; clicking the coordinates in Wikipedia takes you to GeoHack, which lets you view the point on any number of maps, including Google Maps), I decided to convert them all in Excel using a formula. This was harder than I expected, and I had to fight through it for a bit. It involved first breaking each coordinate into individual latitude and longitude columns, putting them in hh:mm:ss format, multiplying by 24 (Excel stores times as fractions of a day, so multiplying by 24 converts the value back into decimal degrees), and then recombining them into a single column. The end result was that I (finally) had my coordinates in a Palladio-friendly format (although I am still not sure I couldn’t have done it faster by just reentering them manually).
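For reference, the same conversion is also simple to script directly, sidestepping the Excel time-format workaround. This is a minimal sketch of my own (the function name and the exact coordinate string format are illustrative assumptions, not part of the original workflow); note that it keeps the hemisphere letter, converting West and South readings into negative values:

```python
import re

def dms_to_decimal(dms: str) -> float:
    """Convert a degrees/minutes/seconds string such as 41°09'30"N
    into decimal degrees. Seconds are optional. West and South
    hemispheres produce negative values -- dropping that sign is
    exactly what mirrors U.S. points into the Eastern Hemisphere."""
    match = re.match(r"(\d+)\D+(\d+)\D+(?:(\d+)\D+)?([NSEW])", dms)
    if not match:
        raise ValueError(f"Unrecognized coordinate: {dms}")
    deg, minutes, seconds, hemi = match.groups()
    value = int(deg) + int(minutes) / 60 + int(seconds or 0) / 3600
    return -value if hemi in "SW" else value
```

A batch of coordinates pasted out of Wikipedia could then be run through this function column by column before recombining latitude and longitude for Palladio.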

I now had everything I needed to run the data through Palladio. Here I ran into my second problem. I copied and pasted my data into Palladio, and it showed up perfectly fine on the Data screen. However, when I went to the Map, no points appeared. I tried re-uploading several times, attempted it in three different browsers, and even tried it on a Mac (I’m a PC user, as most government systems only work on Windows). Nothing worked. I went back through Palladio’s FAQs and mapping guide, and I was doing everything right. The points should have been there. Just when I was ready to throw my laptop at the wall, I noticed a single point at the far right of the map (in the Mediterranean). I scrolled the map over, and there were all of my nineteenth-century American army posts….spread across the Middle East and Asia. After a moment of panic that all of my data was clearly wrong, I realized that when I had converted the data into decimal format the E/W had been dropped, and all of my longitudes were positive when they should have been negative. After quickly fixing the spreadsheet and re-uploading, all of my forts showed up where they were supposed to be (except Fort Hamilton, which was hanging out off the coast of Spain because I missed a digit in its coordinates…quickly fixed).

The mapping function of Palladio gave a pretty good visualization of the shrinking number of posts and the increased size of individual garrisons, but I had to play around with how I used it to get the most value out of it. I had a vision in my head of pressing a play button and watching my map change as posts disappeared and the remaining ones grew larger and larger, but Palladio does not have that level of functionality. The mapping visualization and the change-over-time visualization are largely divorced from each other; the map shows static data, while the Timeline feature allows you to see how various elements of your data changed over time. Hopefully future versions of Palladio will add the ability to have the map show change over time; I can’t imagine my project is the only one that would find it useful.

After using different combinations of data and functions, I decided that the best visualization of my data for my purposes was to actually make use of three separate maps/data sets:

For the first map I included all data for 1876-1896, and sized the points by frequency of appearance of place name. This essentially creates larger dots for the forts that stayed active longer, with smaller dots for forts that closed. I also used the Timeline feature, showing how the number of forts decreased over time.



The two other maps are single-year maps, one for my starting year of 1876 and one for the final year of 1896. In these maps I have the points sized according to the “Strength” column, so that the size of each fort is represented on the map based on how many soldiers were assigned there. Looked at side by side, these two maps give a pretty clear visualization of how the number of forts decreased while their garrisons increased.





So despite the lack of a timelapse map view, I was able to get out of Palladio essentially what I hoped to achieve. Seeing the data visually, in both the Map and Timeline functions, gives a much more intuitive and emphatic picture of the change in the frontier army over the last part of the nineteenth century. From my start point in 1876 to my end point in 1896, the Army went from 163 posts to 78. The size of each post also drastically changed. In 1876 there were only 2 posts with over 500 soldiers, and there were 96 (more posts than existed at all in 1896!) with under 100 soldiers. By 1896, 20 forts, over 25% of active posts, held over 500 soldiers, and only 6 held fewer than 100 soldiers.

The project also revealed how fluid these posts were, and exposed patterns related to the larger events of the era. Forts would appear, disappear, and reappear in response to the campaigns against the Indians or other threats. In 1876 a number of posts still existed throughout the South, enforcing Reconstruction. In 1896 there were additional temporary posts, many containing several hundred troops, in the Northeast in response to labor unrest.

In addition to helping me visualize the changing nature of the Army in the late nineteenth century, I also learned quite a bit about the process of digital history, both in the specifics of Palladio and more generally. First is how much of your time will be spent in the simple drudgery of data entry. I probably spent 80% of my time building my spreadsheet of data and formatting it to be compatible with Palladio; my actual interaction with the digital tools was by far the smallest part of the project. Of the time I spent working with Palladio, probably fully half went to fighting through how to get it to do what I wanted, which left only 10% of my time for actually exploring what Palladio showed me about my data. Still, I was able to get a final product that helped me visualize the data, and even if the results were unsurprising, the visual certainly conveyed them with more emphasis than was possible just from looking at the numbers on my spreadsheet. Now I can only hope that the next iteration of Palladio includes a timelapse function for the map.

The concept of today’s students as “digital natives” provides the major theme of this week’s readings. This concept, essentially that the younger generation has grown up immersed in technology and is inherently comfortable using it, has proved surprisingly resilient, especially among those who are members of an older generation and learned how to use various digital technologies later in life (“digital immigrants”). Nor is this concept confined to the educational profession; in the military, senior officers frequently talk about how young soldiers “grew up on video games” and are thus comfortable using advanced military technologies such as drones and remote-controlled bomb robots.

Yet, as explored in many of this week’s articles, this concept conceals some important caveats and issues. Danah Boyd engages with these issues most directly, but criticisms of this concept are present, implicitly or explicitly, in almost every article we read this week. Boyd’s criticisms rest on two different but related levels. First, Boyd argues that teens will not become critical contributors to the digital world simply because of the date of their birth, and that the ability to navigate Facebook and Twitter does not imply a mastery of more complex digital tools. Teens will need to be educated to achieve media literacy and technical skills. Furthermore, these two attributes must be accompanied by access, which leads to Boyd’s second criticism. Boyd (and many others) argue that the concept of “digital natives” conceals significant amounts of digital inequality within the younger generation, tied largely to access, which is in turn largely tied to socioeconomic status.

Adam Rabinowitz, in his review of his experience teaching an undergraduate course heavily focused on digital tools, likewise criticizes the universality of the digital native concept, pointing out that “students can be avid users of facebook and consumers of Youtube videos and still find it very difficult to use new, often complex or non-intuitive digital tools in a classroom setting” and that students largely prefer their digital content in smaller, more manageable doses than educators often assume. Allison Marsh also agrees with this assessment of student skills and interests, claiming that students are not yet convinced by digital humanities and many simply want to be “regular historians.” Mills Kelly, while generally more positive on the digital skills and engagement of students, does agree with Boyd that the profession needs to take a more proactive approach to teaching students how to use digital tools.

While these readings do not completely agree in their assessment of teaching in a digital world and the concept of “digital natives,” it seems clear that a more nuanced understanding of the access, media savvy, and technical skills of students is critical to the proper teaching of, and with, digital tools. Educators must recognize both the variance within the generation and the fact that the prevalence of technology and the use of social media such as Facebook and Twitter do not necessarily equate to an automatic or intuitive understanding of either the conceptual possibilities or the nuts-and-bolts workings of academic digital tools. In order for students to properly understand and implement these digital tools, they must still be taught their use, just as we currently teach more traditional historical skill sets such as engaging with historiography and structuring an argument (and writing the papers that Mills Kelly is so critical of).

This week’s readings brought up an issue that touches on much of what we’ve dealt with this semester but that we have not yet addressed. For all the talk of digital media “democratizing” history, the expanded access that drives this democratization requires the navigation of copyright law. While print historians are largely familiar with the concept of “fair use” without, in many cases, even being familiar with the term, digitization brings in additional concerns, as it does in many other areas. The ability of digital media to rapidly incorporate vast numbers of documents and other media creates far more copyright concerns than print historians have typically had to deal with.

The most interesting aspect of this week’s readings, for me, was the section on the embargoing of dissertations. Trevor Owens and Rebecca Anne Goetz raise some very good points about the issues with the AHA statement and the general benefits of open access. Their articles were especially convincing on the larger issue of tying academic standing to the requirements of for-profit presses and limiting the focus of academia to scholarly monographs. Despite the strong arguments of Owens and Goetz, and the fact that I think the relative benefits of embargo vs. open access are still very much undecided, it should be noted that the AHA statement does not disavow or call for the elimination of open access; it simply calls for the decision between embargo and open access to be made by the dissertation’s author rather than imposed unilaterally by the university. This seems like a rather reasonable request, one that is hard to argue against even if you believe in open access.

Finally, I thought the Creative Commons Licenses were an intriguing compromise between copyright and open access, although it would be interesting to see some evaluations of its usefulness outside of its own commercial website.

I felt that this week’s discussion did a pretty good job of covering the readings and drawing connections between them. Our questions were relatively effective in steering the conversation and engaging the class in the major themes of digital scholarship, although some questions were more effective than others in driving an open-ended conversation and, unsurprisingly based on previous class discussions, we answered several questions before we got to them based on where the discussion went from earlier questions.

Several key issues emerged, some of them new to this week but many echoing themes we’ve dealt with less explicitly earlier in the semester. One of the central topics threading through the discussion was what makes “digital scholarship,” how it relates to the more traditional markers of scholarship as manifested in books, and whether those markers and values are still valid in a digital age. Closely related to this is the difficulty digital scholarship faces in gaining acceptance within the academy, especially in hiring and tenure decisions. The consensus, unfortunately, is that what digital scholarship is capable of doing and what the academy wants don’t mesh well. It does seem that digital history is slowly gaining ground and more jobs, although outside of specifically digital positions the rest of the academy is largely unaware of and unconcerned with digital scholarship, even at institutions (like George Mason) with strong digital programs.

While much of the class discussion covered topics I had already thought of when preparing for class, several issues were raised that I hadn’t considered. Most notable was the point, drawn from Melissa Terras’ article, of whether historians need to pay attention to how others are using their work…and whether that should drive our future research. While I don’t think we should blindly follow our audience, there is likely some value in paying attention to how our readers interact with our work and whether that raises any questions we hadn’t considered. An additional important point, one so simple that it is easy to overlook, is the fact that open review (and other digital tools) represent such a change from traditional tools that historians struggle to even understand how to use them.

I think the overall discussion was useful in exploring the issues identified above, and provided a good foundation for thinking about digital scholarship. In retrospect, this discussion of the abstract issues of digital history might have been useful earlier in the semester, but having the practicums out of the way early to allow us to work on our final projects is probably the best structure for the course.

This week’s readings engage directly with many of the issues that have been implicitly raised in previous weeks. While covering various topics with significantly differing interpretations, all of this week’s articles are primarily concerned with how to do scholarship on the web and, more fundamentally, whether these new digital tools and mediums alter the basics of scholarship. Broadly speaking, and perhaps overly simplistically, these articles, and many of those we have read in earlier weeks, attempt to reconcile digital methods and scholarship in one of two opposing ways. Some authors argue that digital media are capable of producing serious scholarship that is different from, but serves similar purposes to, traditional scholarship. Thus, William Thomas presents digital scholarship as translating “the fundamental components of professional scholarship—evidence, engagement with prior scholarship, and a scholarly argument—into forms that took advantage of the possibilities of electronic media.” This view treats digital scholarship as analogous to traditional monograph-based scholarship, but offering additional forms of presentation that books are incapable of. Digital scholarship thus provides a fusion of form and content that is new, but with the same fundamental elements and purpose as earlier scholarship.

A more radical view of digital scholarship argues that traditional definitions of scholarship or “serious history” are constructed around the strengths and weaknesses of the book (“monograph culture,” as Edward Ayers terms it), and as such are not a valid universal set of definitions and practices. Thus the rise of digital scholarship presents a fundamentally new way to approach history and scholarship, invalidating or at least questioning definitions of what constitutes scholarship that are based on a single medium (the scholarly monograph). Digital scholarship instead frees historians from the “fascist authority of the format,” in the inflammatory words of Tim Hitchcock.

While vastly different in their approach and assumptions, both of these schools of thought argue for the validity of digital scholarship as intellectually credible and valuable history. However, neither of these views has fully convinced the academy, which remains largely grounded in the traditional “monograph culture.” This, as pointed out by Alex Galarza, Jason Heppler, and Douglas Seefeldt, presents significant risks to those who choose to study and produce digital scholarship, as it is largely discounted in the hiring and tenure decisions of many institutions that still hold the written dissertation/monograph as the sole acceptable scholarly product. Until the larger academy reforms its view of digital scholarship and translates that reform into wider acceptance in hiring and tenure decisions, neither of the above arguments will likely gain much traction.

Despite Adam Chapman’s hope that “by now few deny that contemporary game series like Civilization or Assassin’s Creed constitute history,” the validity of games (even less commercial ones such as Pox and the City) as history, and specifically as historical scholarship, remains both debated and, for many professional historians, openly denied. Partially, this is part of a larger conversation about what constitutes history, with similar debates occurring around popular histories and historical films. As Chapman points out, much of the argument against games as history rests on an assumption of the scholarly book as equivalent to “history.” Chapman, in an extremely elegant intellectual argument, instead proposes that all forms of history (or any form of representation) necessarily involve simplification and reductionism. Thus the complaints of critics about games are not the result of inherent flaws in the medium, but of flaws inherent in the representation of history itself, and merely differ in their manifestation from similar flaws inherent in other mediums such as the literary narrative. Chapman’s argument, with its implication of the impossibility of recovering the past (or the truth?), has a strong flavor of post-modernism, but he does raise important issues and provides a better paradigm for the analysis of games as history, with a focus on form as well as content (which at least is more intellectually useful and interesting than playing “gotcha” with anachronisms).

On a deeper level, however, Chapman’s argument merely recasts the old debate about what constitutes history. While games certainly provide some benefit to the historical field by increasing interest among the general population, there are few commercial games that would meet any widely held definition of a scholarly work. Certainly Civilization provides no citations to ground its historical-representation decisions in primary sources or within the greater historical debate. Additionally, going back to the definition of “serious history” provided by Carl Smith in Week 8, most academics will see an interpretive argument as a critical element of a scholarly work. This is inherently difficult in games, as a large element of their attraction is their interactivity and ability to provide non-linearity, which makes it hard to construct and convey an argument. Indeed, as the creators of Pox and the City discovered, a too-tight focus on actual historical facts and events is almost unfeasibly constraining on the construction of appealing gameplay.

While games may constitute history, broadly defined, they largely lack the attributes of scholarly history as it is currently recognized. Whether this current definition is valid is a broader question, one that is as hotly debated as the still unrecognized status of games as history, despite the hopes of Adam Chapman.


The central tension in crowdsourcing history, apparent in almost all of this week’s readings, is the issue of access and authority. This tension is not limited to digital history, as the pre-digital practice of oral history has long noted the importance of anonymity in collecting people’s stories. While historians always prefer to know as much as possible about the contributor of their sources in order to facilitate analysis (an interview with a labor union leader will be read and evaluated much differently than one with a factory manager or an average laborer), many contributors to both oral history and digitally collected history are less willing to participate, or to be fully honest, if their name and identifying information is attached to their interview or contribution. If anything this is even more true for digital history, as most internet users have a higher expectation of privacy and anonymity than the participants in an oral history project. To this is added the convenience factor: the more data or effort required to submit a contribution, the fewer visitors will be willing to contribute. Digital historians, like oral historians, must therefore balance their desire for metadata against their desire for a higher volume of contributions.

The question of access goes beyond this debate between data and anonymity, however, as many crowdsourced digital history projects involve the everyday user not just in contribution but also in the editing and writing of history. The most prominent example of this is Wikipedia, but it can also be found in projects such as Transcribe Bentham, which allows all comers to help transcribe and digitize the works of Jeremy Bentham. Wikipedia especially has been contentious for historians, as it values consensus over expertise and suffers from faults in both coverage (its content is driven mainly by what people are interested in rather than by balanced coverage of history) and bias, despite its avowal of a neutral point of view. Despite the issues raised by allowing all comers to contribute and edit articles, it is impossible to deny that Wikipedia has produced a far more massive corpus of openly available work than would have been possible with a more rigorous, expertise-focused approach (see the relative failure of Nupedia detailed by Roy Rosenzweig).

Another issue with Wikipedia, pointed out by Leslie Madsen-Brooks, is the unrepresentative nature of its contributors, a population dominated by male, internet-savvy, English-speaking individuals. Thus, even while Wikipedia’s format and process arguably allow for more varied viewpoints than traditional history, that potential has been stymied by the biases of the contributing population. This reflects a second issue of access: even when contribution is completely open, participation is shaped largely by a project’s (any project’s, not just Wikipedia’s) ability to attract contributors. Attracting contributors is a critical part of any digital collection project, and requires what Roy Rosenzweig and Dan Cohen have described as “magnet content” to bring visitors to the site and convince them to participate. Perhaps the greatest magnet content is other contributions, but this can be supplemented by direct contact through email, media coverage and marketing, or social networking. As Sheila A. Brennan and T. Mills Kelly have pointed out, this can also involve outreach to relevant groups offline in the “analog world,” and special attention should be paid to providing avenues for individuals without internet access or skills to contribute (the Hurricane Digital Memory Bank used both voicemail and mail-in reply cards).

The issue of access and authority thus overlays many of the decisions made by digital historians seeking to crowdsource history, and indeed this tension is a central theme in debates over almost every aspect of the “democratizing” influence of digital history. However, as access is one of the central benefits offered by the digital humanities, limiting it in the interest of greater academic authority must be a seriously considered decision.

Building an exhibit with Omeka this week demonstrated some of the same patterns that were apparent last week working with Google Maps Engine. While I already had data on my computer from a previous research project on the U.S. Army Ordnance Department’s treatment of breech-loading arms in the Civil War, the most time-consuming portion of the entire exercise was going through the hundreds of images and finding the ones that would be useful in telling a story in exhibit form.

The next most time-consuming part was entering the metadata for each item, which combined several decisions about how to categorize each record with the drudgery of data entry. I found myself wishing there were a way to add metadata to a group of items rather than entering it individually for each one, since, apart from title and description, most fields remained the same across items. Working with a set of Ordnance Department records and correspondence from the National Archives (no copyright concerns!), I decided to list “U.S. Army Ordnance Department” as the creator and to use the date of each individual record for the date. I listed the record group under source to help users locate the originals if they choose, but did not go the extra step of listing the full data from my own record keeping under “identifier,” as it does not conform to a standardized system. For the title of each item I used the form I would use to cite it in a scholarly work, leaving more detailed description to the captions.

I chose gallery as the layout for my exhibit, as it seemed to make the most sense for a simple exhibit like this. I organized the items chronologically, which best suited the exhibit’s purpose of showing change over time and the contrast between Ripley and Dyer.

Below is the link for my Omeka exhibit on the Ordnance Department and Breech-loaders during the Civil War:


Ordnance Department and the Civil War




From this week’s readings it seems that the digitization of history has been more openly welcomed by public historians than by their colleagues in academia. As Anne Lindsay notes, many public history institutions have been driven to the web out of necessity as visitors increasingly treat the web as a first stop for information; this necessity is felt less strongly in academia, and it has pushed public historians and heritage tourism sites toward a fuller engagement with the digital world. The impact on these heritage sites and organizations has been significant, as their expansion into the virtual world has opened up public history to those who, for financial or other reasons, cannot make it to the physical sites. By reaching visitors who cannot attend in person, it has also opened up a wider population of potential donors. Virtual tourism, however, is not the only role for the digitization of public history: Lindsay points out that the digital experience must be harmonized with the narrative of the physical site as well, allowing each to reinforce the other and build a connection with visitors. Additionally, and tying back to many of the articles from last week’s readings, there is the advantage that scale and accessibility offer public history through a digital presence. Unlike a physical site, the web is not restricted by space, and, as it is easier to revisit than a physical museum, it faces much less restriction in time as well.

Another idea, less explicitly stated by Lindsay, is developed more fully by Melissa Terras and Tim Sherratt: the role of social media in driving access to specific parts of collections and digitized sources. While some of this traffic is intentional, such as the increasing use of bots to draw attention to random entries from various collections, much more is the result of individual users’ decisions and links. This can have something of a skewing effect, as Sherratt points out that Trove’s visits spiked due to a link to a single article from Reddit. Even more worrisome are users who mine these digital collections for evidence to support an opinion they already hold, then share that data without context. This type of visit is also fickle, as Terras points out that these spikes in interest are often short-lived. Visitors navigating to these sites also rarely engage further with the content available; Sherratt notes that only 3% of the visitors linked in from Reddit explored the Trove’s holdings beyond that one article. However, as Sherratt argues, “3% of a lot is still a lot,” and for at least some people this might have opened up a greater engagement with history. A point left unmade in either article is perhaps also important: the original sharers of these articles and images surely spent significant time engaging with the site, so sharing them on social media or Reddit both expands the exposure of public history and demonstrates the interest they held for those users.