After an abysmal showing in the Spanish-American War, the United States Navy begins a long series of improvements.
Hello everyone and welcome to History of the Second World War Members Episode 19 - The United States Navy Part 1 - Planning for the Future. By the end of the Second World War the United States would possess unquestionably the most powerful navy in the world, by a considerable margin, really stronger than all other existing navies combined. It would of course be a critical tool in the defeat of Japan in the vastness of the Pacific, and then for the next 80 plus years the United States Navy would retain its position as the most powerful maritime force in the world. While this was how the war ended for the United States, it is of course not how it began. The attack on Pearl Harbor and the losses experienced within the ranks of the navy’s battleships, even if most of them were only out of action for a short period, required a bit of rethinking about how the United States Navy could project power. In this episode we are going to start our investigation of the United States Navy during the interwar period by discussing the structures that were put in place that would allow the navy to transform itself after some very trying experiences during the Spanish-American War. These challenges would bring about institutional changes that would allow the interwar navy to grow and build up a set of doctrine that would then allow for the massive transformation that would occur after 1941. I think this quote from a 1938 manual entitled Tentative War Instructions and Battle Doctrine, Light Cruisers, is my favorite definition of doctrine and its purpose within any military arm: “The purpose of a written battle doctrine is to promote effective coordinated action in battle through mutual understanding.
In the absence of instructions, the doctrine should serve as a guide to sound decisions and appropriate actions.” The formulation of that doctrine would be a constant interaction between technical capabilities, usage expectations, and general ideas about the best way to use those capabilities to meet those expectations. This then makes the study of doctrine interesting because it should, if it is being properly formulated, be constantly changing. At times these changes are slow, as often happens in peacetime due to budgetary constraints and limited technological advancement, but during wartime it may go into hyper-accelerated evolution. For the United States Navy this is exactly what would happen during the Pacific War, but that never meant that the interwar years of change and definition were meaningless: it was the structure put in place, the ideas developed, and the overall mindset of the Navy that would allow it to grow and evolve after 1941. We will end this episode with a brief overview of the Fleet Problems, the series of large exercises that the Navy would use to test ideas, followed by a summary of what the battle instructions were. Next episode we will begin to look at the evolution of War Plan Orange, the war plan for a war with Japan, the evolution of which can tell us a lot about the views of the United States Navy and what it thought was the best way to win such a war.
We start our story four decades before the Second World War with the Spanish-American War. A full history of that war is not required, but the most important outcome for our purposes is the fact that the Navy found that it was not really capable of waging a modern war. There were logistics problems, ship design problems, a general lack of cohesion within the Navy, and there were also serious personnel problems. This was the first major naval operation in decades, and as might be expected things did not go exactly to plan. In terms of the long-term evolution of the navy, the most important outcome of these problems was the decision to create the General Board, which was established via General Order 544 in March 1900. The General Board was a strictly advisory body, but its area of expertise was in preparing the fleet for future operations. Beyond the wide range of topics around war planning and preparations, the General Board would also provide guidance on ship design, tactics, planning, basically all aspects of the Navy and fighting a war. One key fact about the General Board, and the reason it was not called a General Staff, was that in the United States at this time the idea of a General Staff was viewed very negatively. There were concerns among civilian leadership that a General Staff, with more real power than the General Board would possess, would compromise civilian authority and result in a European-style concentration of military power. This was also a period when the overall military power of the United States was purposefully kept small, with little support for military spending beyond the absolute minimum. So, lacking any real power itself, the General Board could only exert influence and advise the Secretary of the Navy, who was a civilian and did have actual power.
This was a decent arrangement, and was really required as naval technology and the complexities of waging a naval war were very rapidly on the rise. Over the decades that it would be in place the General Board would make important decisions on topics like ship design, primarily after 1909 when the Board on Construction was dissolved. For a specific example of a decision that the Board would push for: with the Nevada class in 1911 the General Board pushed for the ships to be designed around drastic increases in engagement ranges. This required changes to training, but also to ship design, and the Nevada class would be the first ships designed around engagement ranges over 10,000 yards. Other battleships could fire over that range, but they were always searching for closer engagements. Another important change around this time was the introduction of the Naval Personnel Act, an act that had two purposes. The first was to reform the training and expectations for naval officers, especially around the knowledge and skills that the officers were expected to possess. The most important shift was a much greater emphasis on all naval officers having more knowledge of their ships and how they worked, along with general technical skills. The second purpose of the Act was to greatly accelerate advancement within the navy, to allow younger officers to be promoted earlier and for merit to play a larger role in that advancement. This required some changes among the most senior officers, senior in terms of age, and a number of officers would be given the option of early retirement or simply be forced to retire. This shift allowed new and more skilled officers to advance through the ranks, but it also made the entire structure of naval leadership more professional, due particularly to the greater emphasis on continued learning and technical understanding.
In comparison to some other navies around the world some of these changes were long overdue, and they would at least push the US Navy on the paths that the British, Germans, and others were already on.
When discussing technological changes in the years before the First and Second World Wars we often talk in broad terms, so I thought it would be interesting to dive into one specific area of advancement within the United States Navy, revolving around the problem of actually hitting things with the giant guns that were being placed on battleships. It was one thing to push for larger and larger guns to be placed on ships, but as guns became larger, longer, and capable of firing over greater distances there was a rapidly growing problem of actually hitting anything with them. During the Spanish-American War, at the naval battle of Santiago de Cuba, when firing was done from what was at the time the sizable distance of over 1,000 yards, the accuracy of the American ships was abysmal. This was a problem that all navies would have to find ways to solve, because the range would increase tenfold, from around 1,000 yards at Santiago to over 10,000 at Jutland. To try and cope with this problem the US Navy would begin to develop a fire control system that would eventually grow into an incredibly sophisticated system utilizing technology and coordination to achieve one goal: hitting the target. The first key shift was to centralize control of the guns, something that would be pushed for by Admiral Sims in the lead-up to the creation of a special board on gunnery, which was created by the Secretary of the Navy in 1905. Sims believed not just in centralized control but also in turning the act of firing the guns into a system that could provide repeatable and dependable results. Centralization was the first step in this shift, with the goal being to take control of firing the guns away from the individual gunners, and place it in a literally higher place.
The first step was to move the responsibility for spotting for the guns to a central location as high on the ship as possible, which would provide better visibility of enemy ships and the fall of shells; in this case higher was better, and the spotting masts were far better suited for this purpose than positions in the turrets. Then the next step was to move control of the aiming of the guns to a central location, often within the heart of the ship where it was well protected. From that position the information from the spotters could be combined with technical information about the guns to produce better results when orders were given to the guns. Once control was centralized, there would be improvements in the processes, procedures, and abilities of the team that was in control. Part of this was simply the individual officers who were in the spotting positions and directing fire getting better at what they were doing; at this stage there was a real benefit to skill and experience to make up for what was missing in technology. But as technology began to improve, some of the tasks that were previously done manually moved to machines. This included a system of plotting, where all information and, later, computation would come together to track enemy positions, take into account external factors like the weather, and then combine all of that information to determine how to hit the enemy ship. The first major step towards improving this process with technology was to take specific areas of calculation and design machines that could do them faster. This increased ability to do calculations was aided by the constant improvements made to the instrumentation that fed into the calculations, with one example being automatic range finders, which did exactly what you might expect: determined the range to the target.
This removed the need for any kind of judgement call by a spotter, which obviously could be in error, and it also meant the range could be constantly updated with new and accurate information. As I am describing this now, this evolution might seem obvious, and it is how things are done to this day, but at the time it was a major shift away from how things had been done before, with each gun controlling its own firing and adjustments.
The next, somewhat logical, step was to centralize not just information gathering and calculations, but also the aiming and the firing of the guns. This shift, often called director firing, put control of the guns in one position on the ship, or as would be described by the US Navy in 1916: “Director firing is a method of firing all or part of the guns of a battery, in salvo, from a central point, with one sight; all the guns fired having been set at a predetermined angle of elevation, measured from a standard ‘reference’ plane.” The shift to director fire was rapidly implemented in the American fleet in the years after the First World War. One concern that was raised about having such central control was the simple question of what would happen if that central control space was incapacitated by the enemy, probably by getting hit by a shell. To get around this problem, which was a real and valid concern, a system was put in place that allowed for several redundancies. The fire control center was first of all very well protected within the ship, but the spotting stations were generally far more exposed, because they had to be. In the case of the main spotting tower being damaged, the conning tower was equipped to take over. If the plotting room was somehow damaged, then its role could pass to facilities within the upper turrets on each end of the ship. If everything went incredibly badly, the last resort was for all responsibility to revert back to the turrets themselves under what was called “local control”. Each of these steps meant a reduction in capabilities, but it always left the guns far from helpless. With this basic system in place, iteration and improvements continued, often with the help of advancing technology. One example of this was the introduction of the rangekeeper, an analog computer designed to solve the math around range, distance, shell travel time, and the course of the enemy ship.
These calculations could be done by hand, but the rangekeeper was much faster, and it also allowed for more input variables than the previous solution, the range clock. The rangekeepers made it much easier to work with a range that was constantly changing, or at least changing inconsistently, which is what the simple range clocks could not handle. One of the challenges that these presented is discussed in Learning War: The Evolution of Fighting Doctrine in the U.S. Navy 1898-1945, where historian Trent Hone points out that by the time we get to the rangekeepers, the technology was solving problems complex enough that most naval exercises were not even accurately showing their value. There was some hesitancy within the fleet to use them aboard ship because exercises were so simple that their real strengths were not shown; they just seemed like a hindrance. Another example of the continued evolution of technology was the introduction of the stable vertical indicator. Along with the speed and movement of ships, a major problem when it came to trying to hit a very distant target was the roll of the ship. As distances increased, the margin of error on when to fire the guns during the roll became smaller and smaller; at 16,000 yards even being off by a single degree could cause shells to miss. Before the stable vertical was introduced it was often up to the director operator to estimate when the ship was level by using a horizontal wire and the horizon, which was not exactly foolproof. The concept of having a far more accurate measure of when to fire the guns seemed logical, and it would be put in place on the Colorado class battleships, which were built starting in 1917. Another major change to the system would take place after the First World War, and would be something completely outside of the ship, with the introduction of aerial spotting, which would be tested on the battleship Texas in February 1919.
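To make the rangekeeper's job a bit more concrete, here is a toy sketch of the basic geometry it mechanized: given both ships' courses and speeds, project the future range by dead reckoning. This is only an illustration under simplifying assumptions (flat plane, constant courses and speeds, illustrative numbers); the function name and parameters are my own, not the actual Ford rangekeeper design.

```python
import math

def project_range(own_speed_kn, own_course_deg, tgt_speed_kn, tgt_course_deg,
                  range_yd, bearing_deg, minutes):
    """Dead-reckon the future range between two ships on steady courses.

    Speeds in knots, courses and bearing in degrees (0 = north), range in
    yards. One knot is about 2025.37 yards per hour.
    """
    yd_per_min = 2025.37 / 60.0  # yards per minute per knot
    # Target's initial position relative to own ship.
    tx = range_yd * math.sin(math.radians(bearing_deg))
    ty = range_yd * math.cos(math.radians(bearing_deg))
    # Relative velocity of the target with respect to own ship.
    vx = (tgt_speed_kn * math.sin(math.radians(tgt_course_deg))
          - own_speed_kn * math.sin(math.radians(own_course_deg))) * yd_per_min
    vy = (tgt_speed_kn * math.cos(math.radians(tgt_course_deg))
          - own_speed_kn * math.cos(math.radians(own_course_deg))) * yd_per_min
    return math.hypot(tx + vx * minutes, ty + vy * minutes)

# Two lines on parallel northerly courses, target 16,000 yards to the east
# and two knots faster: the range opens slowly, and at a changing rate.
for t in (0, 15, 30, 60):
    print(t, round(project_range(19, 0, 21, 0, 16000, 90, t)))
```

Note that even in this simple case the range rate itself changes over time, which is exactly the kind of inconsistent range rate a simple range clock, built around a constant rate, could not follow.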
This would be another major advancement in the continuing efforts to increase shell hit percentages, and it would have a very meaningful impact on ship design during the interwar years. There would be a lot of emphasis on making sure there were adequate spotting aircraft on the battleships during this period, and it would be an important part of any major refit or modernization. During the Second World War aerial spotting would prove not to be as successful as hoped, but this was mostly due to reasons outside the control of the battleships or their aircraft.
After the First World War the interwar navy would begin a series of exercises, which were called fleet problems. Each of these would approach a specific scenario or problem that the Navy’s senior leadership felt needed additional data before decisions could be made. To give an example of what might be tested, there were several different instances where the effectiveness of and tactics for aircraft carriers were a major part of the exercises. The one that occurred in 1929 is a great example of the fleet problems, because in that year the opposing fleets would be attempting to attack and defend the Panama Canal, with the attackers standing in for the Japanese. Most of the fleet problems revolved around scenarios involving a war with Japan, and during this exercise the carrier Saratoga separated itself from its associated battleships and swept south to launch an air strike, instead of proceeding as expected with the rest of the fleet. The Saratoga would then be able to launch successful and surprising air strikes on the canal. This is a good representation of the fact that in general there was a lot of freedom given to the staffs on both sides as they decided how to approach the overall scenario. It was a great opportunity to try new things and to generally practice all of the tasks associated with large campaigns involving large fleets over a large geographic area. 21 total problems would be held between 1923 and 1940, with the commanders and their staffs going through full planning sessions for each operation. They would begin with a specific objective and force allocation and then go through the planning with their staffs. There were two major challenges that they would have to work around for each event: the forces of the Navy had to be split between the two sides, and they were not actually trying to damage one another. For the first problem, to try and replicate realistic fleet sizes, there were often phantom or constructed forces that only existed on paper.
This was especially crucial in areas where the number of ships of a particular class was very limited, as with battleships or carriers. By allowing fictional forces to participate, new ideas could be tried even when the numbers required were not available, the problem being that any actions or effects of those forces also had to be constructed. This brings us to the second problem, one that is an issue for every exercise, and for wargaming in general: the application of damage. Any application of fictional damage required some guesswork about such crucial pieces as the accuracy of battleship fire, the accuracy of torpedoes, the effects of air strikes, and the damage caused by any type of attack. This was particularly challenging around air power because there was simply no historical data available. For the results of, say, a gun line duel there was First World War era historical information that could be referenced; for air power there was nothing. This resulted in inevitable arguments between surface officers and aviation officers about how much damage particular air attacks might do. If you try to map the assumptions they were making against real events from the Second World War, a few trends emerge: for example, horizontal bombing was generally considered to be far more effective than it actually was. During the Second World War it would prove to be almost useless against naval targets. Other types of bombing, particularly dive bombing, were actually pretty accurately estimated. In total the fleet problems would prove to be a mixed bag in terms of effectiveness in making estimations about what future combat operations might look like. But they would have a much better track record around training and understanding what large fleet operations required to be successful.
For example, the logistical problems that were inherent in such large operations were made clear, along with the sheer organizational lift that would have to be completed anytime large naval operations were planned.
When it came down to a lower level, the fleet problems were also important opportunities to test the training and preparation of individual commanders. They would be guided by the tactical instructions provided, which in the United States Navy emphasized the initiative of individual ship commanders. The general guidance was one of aggression, in the hope that this would keep the enemy off balance and prevent them from marshaling an effective response. To pull from the Destroyer Instructions published in 1921: “time must not be wasted in endeavoring to attain the most desirable positions from which to attack. It is essential that . . . destroyers, when the tactical situation requires, attack at once and by the most direct route. . . . An effective destroyer attack early in the action will probably assure success in the major action.” The instructions would also emphasize maneuver as an important way to maintain the initiative after battle had been joined. This structure was put in place in the hope of addressing what were felt to be defects experienced by the Grand Fleet during the First World War. During the conflict the Grand Fleet had very detailed and specific sets of orders, and this was blamed for some of its failures to capitalize on opportunities. This conclusion is probably debatable, and I am personally not convinced of it, but it was the one that the Navy would run with during the interwar period. There were however some specific plans for certain situations that could guide the fleet on how to approach certain problems. One interesting one was around major fleet confrontations, in which the battle lines of two navies came into contact. When trying to determine what should be done in such a scenario the US Fleet was confronted with a problem: it was very possible, and actually quite likely, that the enemy would have faster ships, particularly in the battlecruiser class.
Both the Royal Navy and the Imperial Japanese Navy had faster ships, and with the signing of the Washington Naval Treaty there was nothing that could be done to address this problem through new construction. This presented a somewhat simple problem to the fleet: if you imagine two fleets sailing parallel, and one is slightly faster, it can pull ahead if it wants. As it pulls ahead it has complete control of the situation: it could keep sailing straight, it could turn away, it could turn towards, and there was nothing that could be done against any of these moves. To get around this problem the United States Navy planned to do its best to join battle by sailing in the opposite direction. The hope was that this would completely negate any advantage that the enemy had in speed or light ships. To quote from the 1934 War Instructions: “If our fleet can induce the enemy to deploy with the greater part of its light forces or fast detached wing in its van in one direction . . . a deployment by our fleet in the opposite direction should be advantageous . . . . This is because it would place the enemy’s light forces opposite our rear in a position from which they cannot make a successful attack, and a reversal of course by the enemy fleet will not improve the situation for the enemy unless a redistribution of light forces could be made.” Along with the direction of travel of the battle line, the distance to maintain from the enemy was seen as critical. There were several versions of the plan based on engagement ranges, with the most distant type being one for an engagement over 27,000 yards. 27,000 yards was the longest range that could be reached by the newest battleship classes, the Colorado and Tennessee, although older ships would be modernized during the 1930s to extend their range. The underlying assumption was that at any range over 21,000 yards the United States battleships would have the advantage over other navies, or at the absolute very least would be equal.
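The speed problem described above is worth quantifying with some back-of-the-envelope arithmetic. The specific numbers here are my own illustrative assumptions, not figures from the instructions: a 21-knot American line against a notionally 25-knot enemy line, and a 10,000-yard lead as a stand-in for "far enough ahead to dictate the engagement."

```python
# How quickly does a modest speed advantage let a battle line pull ahead?
YD_PER_NM = 2025.37                         # yards in one nautical mile

advantage_kn = 25 - 21                      # relative speed in knots
gain_yd_per_hr = advantage_kn * YD_PER_NM   # yards of lead gained per hour
lead_yd = 10000                             # lead needed to control the geometry
minutes = lead_yd / gain_yd_per_hr * 60
print(round(minutes))  # about 74 minutes
```

So even a modest speed edge lets the faster line dictate the geometry within roughly an hour of contact, which helps explain why the Americans planned to deploy in the opposite direction rather than try to race a faster fleet.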
As range decreased, the danger to both sides escalated very quickly. At a certain point the lighter forces of both sides, cruisers and destroyers, would also become a massive risk to the capital ships, due primarily to their torpedo armaments. This basic structure of a major fleet engagement was still in place in 1941, updated to the new realities of newer ship classes, but the Americans still felt they were inferior when it came to fast ships and therefore still needed to compensate. The fact that there was never a major fleet action like the ones imagined in the planning makes these plans hard to judge. During this episode we have looked, at a general level, at how the US Navy was preparing for a naval war; next time we will look at how it planned to fight a very specific enemy, plans that would be critical after 1941.