This week’s readings bring to the forefront something we, or at least I, have not dealt with yet in our web design. The issue is pretty well summed up in the title of Joe Clark’s article, “How do Disabled People Use the Computer?” From the readings, it is clear there are two main components to this. First, and this is the main focus of Clark’s article (aside from his diatribe against political correctness), is the set of tools and techniques that disabled people use to access the web, and computers in general. Generally grouped under the term “adaptive devices,” these allow disabled users to bypass some of the structural obstacles to computer use and include such hardware and software as screen readers and alternative keyboards and mouse devices.

More importantly for us as CLIO II students, and the main focus of both Jared Smith’s and Mark Pilgrim’s articles, are the steps we as web designers can take to make our sites more accessible. I have to admit, for someone such as myself, whose code is not quite fully accessible (or at least understandable) even to himself, this is a bit intimidating. I’m still trying to figure out the difference between a class and a div, and now they want me to write my webpage for people who experience it completely differently than I do (I’m still working on that skill when it comes to Mac users, let alone disabled users), using devices such as screen readers that I’ve never even been exposed to.

Luckily, people such as Jared and Mark have put together useful and helpful guides that cover how to do so in an accessible (ha!) and mostly un-intimidating way. Finally, I thought Mark’s use of hypothetical disabled users was particularly poignant in illustrating why going through these extra steps is so important, even after I spent eight hours just trying to get the website to look decent to people without any additional obstacles.

 

Comment on Jordan’s Post: “Accessibility and the Web”

This week’s readings were almost entirely tutorial in nature, and complemented the practical exercise we worked on in class last week. While last week’s readings dealt with some of the ethical issues surrounding the alteration of photos, this week’s tutorials accepted as given that in today’s Photoshop world we will be changing aspects of images to fit our design goals, and focused on going over some best practices for doing so.

Taken together, the two Lynda classes and the four blog entries (well, three blog entries and a list of links to other like-minded blogs) by Carmen Moll covered three of the main image techniques: colorizing/recolorizing, restoration, and aging. So far I have only really worked a little bit with recoloring, in our class exercise, and found it involved far more steps than I expected. For some reason I had the idea that recoloring just involved identifying one known color in an image and then running a computer program that, based on that, would figure out and color in all the others (i.e., in a picture of a Civil War soldier I know his uniform is navy blue, so set that and let the computer do the rest). Turns out it’s way harder than that.

Still, this week’s readings, combined with our in-class practice, have given me a good toolkit of techniques for working with Photoshop to colorize, restore, or age images. I’m a long way from being an expert on image restoration, but at least now I have a basic understanding, and some sources to look back to when I run into problems. Perhaps the biggest point I’ve taken away, however, is the value of just spending the time to try different things until it looks the way you want (which sounds similar to my experience learning HTML/CSS).

For my final project I worked on mapping Army posts in post-Civil War America. This project stemmed from my wider research interests, and I hope to use the insights gained from it to further my dissertation research moving forward. For my dissertation I am looking at the professionalization of the Army in the last half of the nineteenth century. One of the factors often cited as part of this professionalization is the consolidation of units onto a smaller number of larger bases, allowing for both economies of scale in fatigue and guard details and the ability to train soldiers to operate as larger units. With this in mind, I wanted to create a map to help visualize both the shrinking number of posts and the growing troop strength at each.

Like all projects, this one had to start with sources. For this project to be effective I needed a reasonably complete list of Army posts along with their assigned strengths and locations. Additionally, I needed this same information over the course of several decades in order to show change over time. While I did not have an existing source base at the outset of the project, I had a pretty good idea from previous research of where I could find the information. For an earlier research paper I worked with the “Annual Report of the Chief of Ordnance,” and I knew that this report, along with those from all of the other departments, was compiled annually and submitted as an appendix to the “Annual Report of the Secretary of War to Congress.” A quick Google search turned up the one year (1877) that has been digitized by Google Books. Contained within the appendices was the report of the Adjutant General, which included a table listing all Army posts by name, location, commanding officer, units assigned, and strength (broken down in more detail than I needed for the parameters of this project). Unfortunately, since only that one year of the Report has been digitized, I had to make a trip to the Army Heritage Center at Carlisle Barracks to get the data for the other years. However, since I had already been able to look at the 1877 report, I showed up knowing exactly what I was looking for and exactly where to find it, so it took me only half a day to pull the data from 30 years of Annual Reports.

DSCN0619

DSCN0620

Once I had the data, the next step was putting it into a format that would be compatible with the mapping program. My initial plan was to make the map in Google Map Engine, as it has by far the easiest user interface. I intended to do a layer for each year and size the point markers by the number of troops at each post. However, there is no good way to make the size of a marker correspond to troop strength, and I would have had to create each point manually, one at a time. For these reasons, I decided to go with Palladio instead. Palladio is a little less user-friendly, and it is still in development and thus suffers from the occasional bug or idiosyncrasy. However, it does allow you to have your points automatically sized by either frequency or an independent value (troop strength in my case), and you can upload all of your points from a spreadsheet rather than creating them one by one.

While I thus didn’t have to create each point individually, in order for them to populate Palladio’s map my data needed to include coordinates for each post. The table in the Annual Report did list each Fort’s location, but only by textual description. Some of these were pretty straightforward, such as “6 miles from Omaha,” while others were more obscure, such as “at the fork of the Concho river,” or relatively vague, such as “in the Owen River Valley.” Obviously this was not going to work for Palladio, so I had to do some additional research first. While time-consuming, this was relatively easy. Many of these Forts have become modern towns, and thus show up in a quick Google Maps search (right-clicking on the map and choosing “What’s here?” will give you the latitude/longitude of any point in Google Maps). I was also pleasantly surprised at the coverage of these Forts on Wikipedia; almost all of them, even those that were only active for a few years and never held more than 100 people, have their own entries. Almost all of those entries contain coordinates, and of those that didn’t, most at least tied the location to a modern town, allowing me to find it on Google Maps.

Now armed with coordinates, I built a spreadsheet combining all the data needed for upload to Palladio, specifically Place Name (the title of each post), Coordinates, Strength, and Date (really year, but Palladio requires yyyy-mm-dd format, so I used the date of the Annual Report). I made a tab for each year, with a consolidated tab combining all years for upload. For the years I chose a representative sampling. While at the archives I pulled the data for all years between 1866 (the end of the Civil War) and 1896. However, I quickly decided to start my mapping project in 1876, which marked both the end of Reconstruction (with troops still posted throughout the occupied South) and the height of the Indian Wars (the Battle of Little Big Horn, where Custer’s command was wiped out, occurred on June 25-26, 1876). As I began data entry I also realized how time-consuming including every year even within this reduced period would be, and so reduced my table to a sampling of years. I included a year roughly every five years (1876, 1880, 1885, 1890, 1895) as periodic reference points and then chose several pairs of years tied to major events: 1876/1877 show the end of Reconstruction, 1880/1881 mark the initial Congressional approval of the Army’s proposal to reduce the number of bases, and 1892/1893 mark what is generally considered the end of the Indian Wars (although clearly no one knew this at the time).
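For what it’s worth, the consolidation step could also be scripted. Below is a minimal sketch in Python/pandas of stacking per-year tabs into a single Palladio-ready table; the file names are placeholders, and the only assumptions are the column layout and one-tab-per-year structure described above.

```python
import pandas as pd

# Sketch only: assumes a workbook with one tab per year, each tab holding
# columns "Place Name", "Coordinates", "Strength", and "Date" (yyyy-mm-dd).
# The file names are hypothetical placeholders.
sheets = pd.read_excel("army_posts_by_year.xlsx", sheet_name=None)  # dict of {tab name: DataFrame}

# Stack every yearly tab into one consolidated table for upload to Palladio.
all_years = pd.concat(sheets.values(), ignore_index=True)
all_years.to_csv("palladio_upload.csv", index=False)
```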

It was at this point in the project that I ran into my first problem. Stupidly, I did not look closely at Palladio before beginning to build my spreadsheet. My coordinates were in degrees/minutes/seconds format (the format Wikipedia uses), while Palladio requires coordinates in decimal format. Rather than go back through and manually reenter each coordinate (Google Maps gives you both formats, and clicking the coordinates in a Wikipedia entry takes you to GeoHack, which lets you view the point on any number of maps, including Google Maps), I decided to attempt to convert them all in Excel using a formula. This was harder than I expected, and I had to fight through it for a bit. It involved first breaking each coordinate into separate latitude and longitude columns, putting them in an hh:mm:ss time format (so Excel would treat degrees/minutes/seconds like hours/minutes/seconds), multiplying by 24 to get decimal degrees, and then recombining them into a single column. The end result was that I (finally) had my coordinates in a Palladio-friendly format (although I am still not sure I couldn’t have done it faster by just reentering them manually).
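For anyone who would rather script the conversion than fight with Excel, here is a rough sketch in Python of the same degrees/minutes/seconds-to-decimal math, including the hemisphere sign that tripped me up later; the function name and the sample values are purely illustrative, not what I actually ran.

```python
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert degrees/minutes/seconds to decimal degrees.

    Southern latitudes and western longitudes come out negative --
    the step I originally missed, which is why my forts ended up in Asia.
    """
    value = degrees + minutes / 60 + seconds / 3600
    if hemisphere.upper() in ("S", "W"):
        value = -value
    return value

# Hypothetical post at 41 deg 30' 0" N, 104 deg 15' 0" W
lat = dms_to_decimal(41, 30, 0, "N")    # ->  41.5
lon = dms_to_decimal(104, 15, 0, "W")   # -> -104.25
print(f"{lat},{lon}")                   # Palladio-friendly "lat,long" string
```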

I now had everything I needed to run the data through Palladio. Here I ran into my second problem. I copied and pasted my data into Palladio, and it showed up perfectly fine on the Data screen. However, when I went to the Map, no points appeared. I tried re-uploading several times, attempted it in three different browsers, and even tried it on a Mac (I’m a PC user, as most government systems only run Windows). Nothing worked. I went back through Palladio’s FAQs and mapping guide, and I was doing everything right. The points should have been there. Just when I was ready to throw my laptop at the wall, I noticed a single point at the far right of the map (in the Mediterranean). I scrolled the map over, and there were all of my nineteenth-century American army posts…spread across the Middle East and Asia. After a moment of panic that all of my data was clearly wrong, I realized that when I converted the data into decimal format the E/W designation had been dropped, and all of my longitudes were positive values when they should have been negative. After quickly fixing the spreadsheet and re-uploading, all of my forts showed up where they were supposed to be (except Fort Hamilton, which was hanging out off the coast of Spain because I had missed a digit in its coordinates…quickly fixed).

The mapping function of Palladio gave a pretty good visualization of the shrinking number of posts and the increased size of individual garrisons, but I had to play around with how I used it to get the most value out of it. I had a vision in my head of pressing a play button and watching my map change as posts disappeared and the remaining ones grew larger and larger, but Palladio does not have that level of functionality. The mapping visualization and the change-over-time visualization are largely divorced from each other; the map shows static data, while the Timeline feature lets you see how various elements of your data changed over time. Hopefully future versions of Palladio will add the ability to have the map itself show change over time; I can’t imagine my project is the only one that would find it useful.

After trying different combinations of data and functions, I decided that the best visualization for my purposes was to use three separate maps/data sets:

For the first map I included all data for 1876-1896 and sized the points by frequency of appearance of each place name. This essentially creates larger dots for the Forts that stayed active longer and smaller dots for the Forts that closed. I also used the Timeline feature, showing how the number of forts decreased over time.

Map1a

Map1b

The other two maps are single-year maps, one for my starting year of 1876 and one for the final year of 1896. In these maps I have the points sized according to the “Strength” column, so that each Fort’s size on the map reflects how many soldiers were assigned there. Looked at side by side, these two maps give a pretty clear visualization of how the number of Forts decreased while their garrisons increased.

1876:

Map2

1896:

Map3

So despite the lack of a timelapse map view, I was able to get out of Palladio essentially what I hoped to achieve. Seeing the data visually, in both the Map and Timeline functions, gives a much more intuitive and emphatic picture of the change in the frontier army over the last part of the nineteenth century. From my start point in 1876 to my end point in 1896 the Army went from 163 posts to 78. The size of each post also changed drastically. In 1876 only 2 posts held over 500 soldiers, while 96 (more posts than existed at all in 1896!) held under 100. By 1896, 20 Forts, over 25% of active posts, held over 500 soldiers, and only 6 held fewer than 100.
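These counts came straight from my spreadsheet, but the same tallies could be pulled programmatically. Here is a sketch in pandas of how one might compute them from the consolidated upload table; the file name is a placeholder, and the column names are the ones from the spreadsheet described above.

```python
import pandas as pd

# Sketch only: assumes the consolidated upload table, one row per post per
# year, with columns "Place Name", "Strength", and "Date" (yyyy-mm-dd).
posts = pd.read_csv("palladio_upload.csv", parse_dates=["Date"])
posts["Year"] = posts["Date"].dt.year

summary = posts.groupby("Year").agg(
    total_posts=("Place Name", "count"),
    over_500=("Strength", lambda s: (s > 500).sum()),
    under_100=("Strength", lambda s: (s < 100).sum()),
)
print(summary.loc[[1876, 1896]])  # compare the start and end years
```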

The project also showed how fluid these posts were, and revealed patterns tied to the larger events of the era. Forts would appear, disappear, and reappear in response to the campaigns against the Indians or other threats. In 1876 a number of posts still existed throughout the South, enforcing Reconstruction. In 1896 there were additional temporary posts, many containing several hundred troops, in the Northeast in response to labor unrest.

In addition to helping me visualize the changing nature of the Army in the late nineteenth century, I also learned quite a bit about the process of digital history, both in the specifics of Palladio and more generally. First is how much of your time will be spent on the simple drudgery of data entry. I probably spent 80% of my time building my spreadsheet of data and formatting it to be compatible with Palladio; my actual interaction with the digital tools was by far the smallest part of the project. Of the time I spent working with Palladio, probably fully half went to fighting through how to get it to do what I wanted, which left only about 10% of my time for actually exploring what Palladio showed me about my data. Still, I was able to get a final product that helped me visualize the data, and even if the results were unsurprising, the visual certainly conveyed them with more emphasis than was possible just by looking at the numbers on my spreadsheet. Now I can only hope that the next iteration of Palladio includes a timelapse function for the map.

This week’s readings brought in an issue that touches on much of what we’ve dealt with this semester but that we have not yet addressed. For all the talk of digital media “democratizing” history, the expanded access that drives this democratization requires navigating copyright law. While print historians largely operate under the concept of “fair use,” in many cases without even being familiar with the term, digitization brings additional concerns, as it does in so many other areas. The ability of digital media to rapidly incorporate vast numbers of documents and other media creates far more copyright concerns than those faced by print historians.

The most interesting aspect of this week’s readings, for me, was the section on the embargoing of dissertations. Trevor Owens and Rebecca Anne Goetz raise some very good points about the problems with the AHA statement and the general benefits of open access. Their articles were especially convincing on the larger issues of tying academic standing to the requirements of for-profit presses and limiting the focus of academia to scholarly monographs. Despite the strong arguments of Owens and Goetz, and the fact that I think the relative merits of embargo versus open access are still very much undecided, it should be noted that the AHA statement does not disavow or call for the elimination of open access; it simply calls for the decision between embargo and open access to be made by the dissertation’s author rather than unilaterally by the university. This seems like a rather reasonable request, one that is hard to argue against even if you believe in open access.

Finally, I thought the Creative Commons licenses were an intriguing compromise between copyright and open access, although it would be interesting to see some evaluation of their usefulness beyond the organization’s own website.

I felt that this week’s discussion did a pretty good job of covering the readings and drawing connections between them. Our questions were relatively effective in steering the conversation and engaging the class in the major themes of digital scholarship, although some questions were more effective than others in driving an open-ended conversation, and, unsurprisingly given previous class discussions, we answered several questions before we got to them based on where the discussion had gone from earlier ones.

Several key issues emerged, some of them new to this week but many echoing themes we’ve dealt with less explicitly earlier in the semester. One of the central topics threaded throughout the discussion was what makes “digital scholarship,” how it relates to the more traditional markers of scholarship embodied in books, and whether those markers and values are still valid in a digital age. Closely related is the difficulty digital scholarship faces in being accepted within the academy, especially in hiring and tenure decisions. The consensus, unfortunately, is that what digital scholarship is capable of doing and what the academy wants don’t mesh well. It does seem that digital history is slowly gaining ground and more jobs, although outside of specifically digital positions the rest of the academy is largely unaware of and unconcerned with digital scholarship, even at institutions (like George Mason) with strong digital programs.

While much of the class discussion covered topics I had already thought about while preparing for class, several issues were raised that I hadn’t considered. Most notable was the point, drawn from Melissa Terras’ article, of whether historians need to pay attention to how others are using their work…and whether that should drive our future research. While I don’t think we should blindly follow our audience, there is likely some value in paying attention to how our readers interact with our work and whether that raises any questions we hadn’t considered. An additional important point, one so simple that it is easy to overlook, is that open review (and other digital tools) represent such a change from traditional practices that historians struggle even to understand how to use them.

I think the overall discussion was useful in exploring the issues identified above, and provided a good foundation for thinking about digital scholarship. In retrospect, this discussion of the abstract issues of digital history might have been useful earlier in the semester, but having the practicums out of the way early to allow us to work on our final projects is probably the best structure for the course.

Despite Adam Chapman’s hope that “by now few deny that contemporary game series like Civilization or Assassin’s Creed constitute history,” the validity of games (even less commercial ones such as Pox and the City) as history, and specifically as historical scholarship, remains both debated and, for many professional historians, openly denied. In part, this belongs to a larger conversation about what constitutes history, with similar debates occurring around popular histories and historical films. As Chapman points out, much of the argument against games as history rests on an assumption that the scholarly book is equivalent to “history.” Chapman, in an extremely elegant intellectual argument, instead proposes that all forms of history (or any form of representation) must necessarily include simplification and reduction. Thus critics’ complaints about games are not the result of inherent flaws in the medium, but of flaws inherent in any representation of history, and merely differ in their manifestation from similar flaws in other mediums such as the literary narrative. Chapman’s argument, with its implication of the impossibility of recovering the past (or the truth?), has a strong flavor of postmodernism, but he does raise important issues and provides a better paradigm for the analysis of games as history, one focused on form as well as content (which at least is more intellectually useful and interesting than playing “gotcha” with anachronisms).

On a deeper level, however, Chapman’s argument merely recasts the old debate about what constitutes history. While games certainly provide some benefit to the historical field by increasing interest among the general population, there are few commercial games that would meet any widely held definition of a scholarly work. Certainly Civilization provides no citations to ground its historical-representation decisions in primary sources or within the greater historical debate. Additionally, going back to the definition of “serious history” provided by Carl Smith in Week 8, most academics see an interpretive argument as a critical element of a scholarly work. This is inherently difficult in games, as a large part of their attraction lies in their interactivity and non-linearity, which makes it hard to construct and convey an argument. Indeed, as the creators of Pox and the City discovered, a too-tight focus on actual historical facts and events is almost unfeasibly constraining on the construction of appealing game play.

While games may constitute history, broadly defined, they largely lack the attributes of scholarly history as it is currently recognized. Whether that current definition is valid is a broader question, one as hotly debated as the still-unrecognized status of games as history, despite the hopes of Adam Chapman.

 

From this week’s readings it seems that the digitization of history has been more openly welcomed by public historians than by their colleagues in academia. As Anne Lindsay notes, many public history institutions have been driven to the web out of necessity, as visitors increasingly see the web as a first stop for information; that necessity is felt less strongly in academia, and it has pushed public historians and heritage tourism sites toward a fuller engagement with the digital world. This has had a significant impact on these heritage sites and organizations, as their expansion into the virtual world has opened up public history to those who, for financial or other reasons, cannot make it to the physical sites. Reaching those visitors has also opened up a wider population of potential donors. The creation of virtual tourism, however, is not the only role for the digitization of public history; Lindsay points out that this digital experience must be harmonized with the narrative of the physical site as well, allowing each to reinforce the other and build a connection with visitors. Additionally, and tying back to many of the articles from last week’s readings, there is the advantage public history gains from the scale and accessibility offered by a digital presence. Unlike a physical site, the web is not restricted by space and, since it is easier to revisit than a physical museum, faces much less restriction in time as well.

Another idea, less explicitly stated by Lindsay, is developed more fully by Melissa Terras and Tim Sherratt: the role of social media in driving access to specific parts of collections and digitized sources. While some of this is intentional, such as the increasing use of bots to draw attention to random entries from various collections, much more is the result of individual users’ decisions and links. This can have something of a skewing effect, as Sherratt points out that Trove’s visits spiked due to a link to a specific article from reddit. Even more worrisome are the users who data-mine these digital collections for evidence to support an already-held opinion and then share that data without context. This type of visit is also fickle, as Terras points out that these spikes in interest are often short-lived. Visitors navigating to these sites also rarely engage further with the content available, with Sherratt pointing out that only 3% of visitors who linked in from reddit explored the Trove’s holdings beyond that one article. However, as Sherratt argues, “3% of a lot is still a lot,” and for at least some people this might have opened up a greater engagement with history. A point left unmade in either article is perhaps also important: surely the original user who found these articles and images spent significant time engaging with the site, so sharing them on social media or reddit both expands the exposure of public history and demonstrates the interest it held for that user.

For this week’s practicum I created a map using Google Map Engine showing the campaigns of the 1st Michigan Cavalry Regiment. The main takeaway, for me, was the sheer level of drudgery involved in a project like this. Using as my base data the service record of the regiment provided on the National Park Service site, I input every service entry for the regiment from January 1864 through its mustering out in 1866. Despite the user-friendly interface of Google Map Engine, this was a lot of work for two reasons. First, without an importable CSV, there was the pure data entry of putting in each of the events. In addition, the data was not “clean,” and I had to spend a substantial amount of time trying to find locations for each of these events, as not all came up in a simple Google Maps search. I had some success using Google and Wikipedia (Wikipedia was especially useful, as the entries for several, but not all, of the battles had lat/long data that could be pasted straight into the map engine), but even my end result is only a partial solution, as finding the exact location of each of these events would require weeks of research.
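I did all of this lookup by hand, but some of it could plausibly be scripted. Below is a sketch, assuming the third-party geopy library and its Nominatim geocoder (not something I actually used for the practicum); in my experience plenty of these nineteenth-century place names would not resolve cleanly anyway, so hand-checking would still be needed.

```python
import time
from geopy.geocoders import Nominatim  # assumes geopy is installed

# Sketch only: look up coordinates for a few event locations by name.
geolocator = Nominatim(user_agent="cavalry-map-sketch")  # hypothetical app name

place_names = ["Fort Leavenworth, Kansas", "Winchester, Virginia"]  # sample names
for name in place_names:
    location = geolocator.geocode(name)
    if location:
        print(name, location.latitude, location.longitude)
    else:
        print(name, "-- not found, look it up by hand")
    time.sleep(1)  # be polite to the free geocoding service
```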

For my map, I created layers for each year that can be displayed together or independently. Generally speaking, I marked each battle with a flag icon, each non-battle military event (“expedition,” “reconnaissance,” “demonstration,” etc.) with a horse icon, and occupations or encampments with a house symbol. The Grand Review got its own icon of two men walking in step (intended as hikers, but it worked well enough for my purpose). For this classification, I assumed that anything not labeled otherwise was a battle. Movements with specified start and end points are shown as lines (such as the “Movement to Fort Leavenworth”), and I included two polygons, for Sheridan’s Shenandoah Campaign and the Expedition into Loudoun and Fauquier Counties, to show their rough areas of operations. Each icon is named using the title used by the NPS, with the description containing the date of the operation. This allows viewers to toggle back and forth between which field is displayed using the label dropdown menu.

 

 

Two aspects of this week’s readings stood out as particularly illuminating for the possibilities of digital history: the concept of an “interactive scholarly work” and the importance of scale. The idea of an interactive scholarly work has been inherent in some of the other tools we’ve looked at, but it is especially salient in mapping projects. An interactive scholarly work is more than a static display of visual information; it allows users to interact with the data and pursue their own research agendas. In some cases this can produce citable evidence, but like many other digital tools it is often best used to raise questions for further research or exploration.

The projects we looked at in this week’s readings, “Visualizing Emancipation,” “ORBIS,” and “Digital Harlem,” all allow the user to interact with and display various data and connections, using layered searches to reveal relationships that would be difficult to pick out through traditional means or without the spatialization provided by map images. The better interactive scholarly works also tie their digital presentation closely to rigorous scholarly research. ORBIS provides all of its background data and sources, making it “not just a site, but also an online scholarly presentation,” according to Scott Dunn. Digital Harlem supplements its interactive map with blog posts that explore various connections and ideas that the map reveals. This, combined with several longer articles published in connection with the project, allows Digital Harlem to “bridge the gap between digital and more traditional research,” according to Nicholas Grant.

Another thread running through this week’s readings is the ability of these digital mapping projects to convey scale in a way impossible in print media. Edward Ayers and Scott Nesbit discuss this in connection with the concept of “deep contingency,” in which different aspects of social life interacted in unpredictable ways across various scales (local, regional, national, military, etc.) to affect individual actions and decisions.

Digital Harlem also deals closely with the effects of scale: mapping black life (and white presence) onto real estate maps, at a level of detail well beyond that typically described in text, reveals and changes the way Harlem looks. Working at this finer scale and including ALL of the available evidence provides a deeper and different picture. This picture is inherently digital, as it occurs at a scale that would be impossible to convey in print and can only be fully explored interactively, with the ability to zoom in and out.

Interactivity and scale, therefore, are essential to digital mapping projects. The data available, in both amount and complexity, make it impossible to display these projects statically. Their full potential can only be unlocked through the digital medium and an interactive user interface. However, tying this digitized and democratized history back to its scholarly foundations is key both to establishing the credibility of these tools and to their use for further research. The best digital mapping projects, including those we looked at this week, therefore represent both interactive websites and online scholarly publications.

 

The most interesting aspect of this week’s readings, for me, is the negative side of the digitization of sources. To some extent, this negative side is a consequence of treating digitized sources as the equivalent of their analog predecessors. As several of the articles we read point out, many digital scholars consider the medium to be just as intrinsic to the meaning and interpretation of digital records as it is to physical materials. On this basis, according to Marlene Manoff, “if print and electronic versions are different objects, we should not treat them as if they are interchangeable.”

Unfortunately, that is how many people see them, with detrimental results. Manoff especially decries the tendency of libraries to view these different versions as interchangeable, resulting in practices such as the elimination of print holdings (especially of periodicals) once digital versions are available, as well as the cancellation of current print subscriptions.

This view of digital and print as interchangeable, in addition to the rather abstract questions about the materiality of digital collections, has more concrete consequences for scholarship in the digital age. Specifically, as pointed out by Ian Milligan (and in our class discussion last week), scholars increasingly access journals and other sources through online databases while continuing to cite the hard copies. Beyond being misleading, this practice conceals a reliance on databases and keyword searches that may miss key sources. As Milligan shows using the example of the Artistic Woodwork strike of 1973, keyword searches can overlook entries because OCR renditions of print sources fall short of 100% accuracy, resulting in “false negatives.”

More broadly, the increasing prevalence of keyword searching, while vastly expanding the available sources and the scope of research open to the average scholar, has to a large and mostly unremarked extent sacrificed context. This sacrifice is further concealed by the continuing practice of citing sources as if the hard copy had been consulted. By navigating via keyword only to those articles directly dealing with the topic at hand, rather than combing through an entire date range of coverage, the researcher has much less opportunity to get a feel for the context of the times or to encounter potentially critical tangential coverage (not to mention the loss of “archival serendipity”). This almost by definition makes it more difficult for the historian to interpret the vastly more numerous but much more narrowly selected sources made available by digitization and keyword searching.

While it is unlikely that even the most tradition-bound historian would choose to completely ignore the vastly increased research capability provided by digitization, it also seems incumbent upon the profession to mitigate the negative consequences of the primacy of digital searches, many of which seem little acknowledged today (certainly no historian, to my knowledge, has yet been called on the carpet for citing hard copies after reading the article on JSTOR).