The Unico Bearington Manufacturing Plant Finance Essay

Published: November 26, 2015 Words: 4254

A few months ago, the UniCo Bearington Manufacturing Plant faced the possibility of being shut down. The plant was neither profitable nor productive, despite the recent installation of automation systems (which increased local efficiencies considerably) and despite continuous cost cutting and the production of great amounts of inventory (an indication that excess capacity was being used to make parts that were not contributing to orders). Orders were continuously and increasingly late in their deliveries, and the plant was facing closure.

My team and I were able to turn this situation around using a very different approach and process of thinking of production than the one generally taught and accepted. Actions were taken to progress towards three operational goals: To continuously increase sales throughput, whilst simultaneously decreasing inventory and operational expense. In essence, focus was shifted from conventional cost reporting and large local efficiencies towards the simple goal of the company: to make money. Any action taken was aligned with this goal.

All the management actions taken are discussed in this report, and can be summarized in 5 steps. This is an on-going process of improvement, and the steps were recycled and reapplied several times, and the process will continue to be implemented. The steps are as follows:

Identify the system's constraints; in other words, identify what it is that needs to be changed or improved because it is inhibiting the plant from moving towards the goal: making money.

Decide how to exploit the system's constraint(s): How to get the most out of the constraint. The question should be 'What to change to?'

Subordinate everything else to the above decision (align the whole system or organization to support the decision made above). How to change, and how to overcome emotional resistance to change, are the key questions to ask at this stage.

Elevate the system's constraint(s) (make the other major changes needed to break the constraint).

If in the previous steps a constraint has been broken, go back to step 1, but do not allow inertia to become the system's constraint.

In and of themselves, these steps might seem fairly obvious, but this report will put them in context so that the very large implications of certain realizations and subsequent actions can be understood. In the process, it was necessary to slaughter some sacred cows, such as beliefs about worker efficiencies, optimum batch sizes, and conventional cost accounting methods.

As a result of implementing these solutions and processes, we were able to increase the bottom line by 20%, decrease inventory enormously, and capture more than 20% more sales with decreased lead times. There was a great increase in overall plant productivity. All in all, we were able to make the plant profitable and productive.

Introduction

The Bearington UniCo production plant, of which I am the plant manager, is a factory that produces machined assemblies. These are sold directly to large end-user customers as spare-parts assemblies, or are furnished to other plants in the UniWare divisions as components of end items. The factory is staffed by competent, well-trained employees and had recently introduced automation at several levels, greatly increasing local production efficiencies and station production rates. UniCo's management had become increasingly adept at reducing costs in operational areas in order to control prices. Financial performance reporting is done quite thoroughly at every level of production in order to produce functional cost budgets that can be managed with precision.

However, a few months ago, the plant was neither productive nor profitable. There were orders bordering on two months behind their scheduled delivery dates. There was an excess of $20 million in warehouse inventory: enormous amounts of finished goods stored in the local warehouse. Deliverable items were too often completed only through reliance on expediting and overtime. Sales were steadily decreasing due to delivery slippage, and materials costs were skyrocketing. As an accumulation of these symptoms, the division faced a cash shortage and the subsequent threat of being shut down. I was politely informed that there was a deadline of three months, until the end of the fiscal year, to turn the terminally unprofitable trends around and save the plant from being sold to the highest bidder. It was clear that, in order to succeed, several drastic changes needed to be made. And so they were.

At present, the Bearington plant is one of the most successful UniCo plants, having been able to greatly increase sales and profitability. This report will serve to summarize the reasons that this was able to be achieved as well as the generic lessons that can be passed on to managers of other plants within the group.

Discussion

In order to save the Bearington plant, I had to challenge the accepted assumptions and measurements that have been misleading managers and business leaders for decades. I had to learn and implement a new way of thinking that - although seemingly wrong in terms of the 'old system' - brought the plant, and thus the company, more real success, productivity and profitability than ever before. I will attempt to be succinct in reporting my findings as well as giving some insight into the deductive process that my team and I had to go through in realising these lessons.

Productivity

A significant focus had been placed on high efficiencies and local costs at UniCo. Having accepted many things without questioning the simple logic behind their purpose and application, I took a 36% efficiency increase in one of our divisions (due to the firm's investment in automation) to be synonymous with increased plant productivity. The recent local efficiency increases, however, had not affected any of the fundamental indicators of productivity. There was no reduction in expenses (no employees had been laid off), no decrease in inventory levels (in fact, inventory, and thus operational expense, increased), and no sign of increased sales. It was then that I realized that the very definition of productivity, and what it means to a business, had escaped me. Productivity is accomplishing something in terms of set goals. A productive act, decision, or process is thus one that moves a company or plant towards a specific goal, and the goal of the plant is to make money. More specifically, it is to increase net profit whilst simultaneously increasing cash flow and return on investment.

Operationally, the focus needs to be on three things: The plant must increase throughput, while simultaneously reducing inventory and operational expense. Everything in the plant can be classified under these three terms. Throughput is the rate at which the system generates money. Inventory is all the money the system has invested in purchasing things which it intends to sell. Operational expense is all the money the system spends in order to turn inventory into throughput. Tooling, buildings, machines, and really the whole physical plant, are all classified as inventory. The entire plant is an investment that can be sold. In principle, investment is the same as inventory.
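These three operational measurements connect directly to the financial ones. A minimal sketch of the relationships (the annual figures below are hypothetical, chosen only for illustration):

```python
# Hypothetical annual figures, used only to illustrate how the three
# operational measures roll up into net profit and return on investment.
def net_profit(throughput, operating_expense):
    """Money generated through sales minus money spent turning inventory into throughput."""
    return throughput - operating_expense

def return_on_investment(throughput, operating_expense, inventory):
    """Net profit relative to the money the system has tied up."""
    return net_profit(throughput, operating_expense) / inventory

T, OE, I = 10_000_000, 8_000_000, 20_000_000
print(net_profit(T, OE))               # 2000000
print(return_on_investment(T, OE, I))  # 0.1
```

The point of the sketch is that any action improving one measure at the expense of the others (for example, building inventory to raise local efficiency) does not necessarily improve either bottom-line figure.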

In identifying how the three measurements above applied to my particular situation in the plant, we realised that the very large inventory was the result of overproduction of superfluous parts in an attempt to keep the efficiency metrics up. Instead of producing the parts necessary for sales, the plant was generating every single part that each stage could handle, creating excess inventory. Capacity was therefore very often unavailable to produce the parts that were actually needed for filling orders on time. The previously learned 'rules of productivity' had clouded my ability to see this occurring. To move closer to the goal, it was necessary to let go of the old attachments to local optimums and conventional cost accounting reports.

The balanced plant

It is also important to realize that a plant in which every employee is working all the time, as a result of optimizing local efficiencies, is not necessarily productive. In fact, it indicates excess manpower and capacity being used to create excess inventory, which results in increased operating expenses in the form of inventory carrying costs.

A balanced plant is one in which the capacity of every resource is exactly balanced to the market demand. However, trimming capacity or manpower in order to reduce operating expenses, as is commonly done, only decreases one of the three measurements. It might be argued that this is a beneficial step, since the other two measurements, inventory and throughput, remain the same whilst expenses are reduced, but this is not the case. It can be shown mathematically that when capacity is trimmed to market demand, throughput goes down and inventory skyrockets. The carrying cost of inventory goes up, and this increase tends to offset the savings presented by the original attempt to lower operational costs through labour reductions. With decreased throughput, demand itself decreases as a result of the inability to deliver. If you continue to trim capacity to demand, demand continues to drop, carrying costs go up, and eventually there is no market left for an enormous amount of inventory. It can be concluded that the closer you get to a balanced plant, the closer you are to bankruptcy.

Dependent events and statistical fluctuations

There are two phenomena that cause this downward spiralling effect: dependent events and statistical fluctuations. Statistical fluctuations are the result of certain types of predictive information that cannot be determined exactly, and they influence predictions of things such as market demand estimates, error percentages, and productivity measurements. Dependent events arise when subsequent operations depend on the completion of the ones prior to them. Viewed in isolation, statistical fluctuations are expected to average themselves out over time, but in combination with dependent events, their effects in fact accumulate. This is because dependent events limit the opportunity to gain from upward fluctuations, while allowing losses from downward fluctuations to accumulate without limit. It became clear that the flow of product through the entire plant needed to be balanced with market demand, and not with the capacities of local operations.

In my plant, these observations were confirmed by carrying out a production capacity test as follows:

100 parts were required by the end of the day.

Five hours (12 pm to 5 pm) were available.

Parts required two operations: fabrication, then robot welding.

Each of these departments averaged 25 units/hour.

Fabrication started at 12 pm, and parts were transferred to welding on the hour.

The expectation was that all 100 parts would be finished by 5 pm.

But in reality, dependent events (welding occurs after fabrication) along with statistical fluctuations resulted in only 90 parts having been completed.
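The mechanism can be reproduced with a small simulation. The sketch below is my own illustration (the +/- 6 units/hour spread is an assumption, not a measured figure): two dependent stations each average 25 units/hour, yet the average completed count falls short of 100, because welding can never make up parts fabrication has not yet delivered.

```python
import random

random.seed(1)

def run_shift(trials=10_000):
    """Simulate the plant test: 100 parts, five hours, two dependent
    stations (fabrication, then robot welding), each averaging 25/hour
    with assumed fluctuations of +/- 6 units. Welding starts an hour
    after fabrication and can only process parts already handed over."""
    total = 0
    for _ in range(trials):
        buffer = 0       # fabricated parts waiting for welding
        fabricated = 0
        welded = 0
        for hour in range(5):
            if hour < 4 and fabricated < 100:      # fabrication needs ~4 hours for 100
                made = min(random.randint(19, 31), 100 - fabricated)
                fabricated += made
                buffer += made
            if hour >= 1:                          # welding starts at 1 pm
                done = min(random.randint(19, 31), buffer)
                buffer -= done
                welded += done
        total += welded
    return total / trials

avg = run_shift()
print(round(avg, 1))  # consistently below the expected 100
```

The upward fluctuations at welding are wasted whenever the buffer is empty, while every downward fluctuation is kept in full, which is exactly the asymmetry described above.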

In a production process consisting of several stages with different capacities, each dependent on the completion of the previous operation, the obvious solution would be to arrange the operations from lowest to highest capacity. Statistical fluctuations that decrease the local output of the first stage will be made up for in the next stage; a decrease in both the first and second stages will be made up for in the third, and so on. Having more capacity downstream is therefore a viable solution. However, in my plant, rearrangement of the production steps was not a possibility.

Bottlenecks

Therefore, I turned my focus to finding the constraints whose capacity was less than or equal to the demand placed upon them, and which were thus restricting the flow of product through the entire plant. These types of constraints were termed bottlenecks, and will be referred to as such throughout this report. Resources whose capacity is greater than the demand placed upon them were termed non-bottlenecks. If bottleneck capacity is kept exactly equal to demand, and demand drops, costs will go up, resulting in a loss of money. The objective, then, is to maintain bottleneck capacity slightly lower than market demand.

My team started to search for the plant's bottlenecks, and after days of analysing several thousand pages of output data, we started looking for a simpler approach. Finally, it was realized that the bottlenecks should be identifiable by the backlog of work-in-progress in front of them. In my plant, the NCX-10 and the heat-treat furnace were identified by the piles of inventory sitting in front of these operations.

The NCX-10, a robotic multi-purpose automation machine, came as a surprise to me, as it was supposed to have delivered a major efficiency increase over the original manual processes. Specifically, the previous machines had a total cycle time of 16 minutes and required 10 machinists (one at each station). The new NCX-10 robot can process the same item in 10 minutes using two operators. Less time per part and fewer operators should have resulted in lower costs and higher efficiency, but there is a six-month lead time to train an NCX-10 operator due to the specialty requirements of the position, and trained operators were leaving the company faster than it could train replacements. The heat-treat furnace was being run at partial loads due to expediting. Neither one of the bottlenecks was running at full capacity.

As has been discussed, the bottlenecks determine the flow of product through the plant. Management would not have been willing to accept a request for investment in even greater capacity for a plant that wasn't making money, so my focus was on improving bottleneck capacity by other means. The first step taken was to put quality control in front of the heat-treat bottleneck. Rejecting defective material before the bottleneck operation ensured that valuable processing time was not wasted on parts that would later be scrapped. Process controls at the bottlenecks were also redesigned to minimize rework, ensuring that no bottleneck time was spent processing defects.

The following further steps were then taken, again to increase the bottlenecks' capacity:

Old equipment was reinstalled to run parallel to the NCX-10

Dedicated personnel were assigned to the NCX-10 and heat treat. Even though these workers were quite often unoccupied, this ensured that the bottlenecks were never idle. It did not require more personnel, merely moving people from non-bottlenecks to bottlenecks.

A portion of the heat-treat parts were sent to a vendor. The expense of this is recovered because increased output of bottleneck parts translates into more filled orders and thus increased sales.

Batches loaded into the heat-treat furnace were made to consist of a combination of parts, so as to fill the furnaces up instead of running them half empty to heat specific parts. (This increased efficiency by 10%.)

Discussions with the engineers revealed that some parts could be processed differently and omitted from the heat treat entirely.

The tag system

In an attempt to fill the late orders in sequence according to due date, a schedule of which components to treat at the bottlenecks was given to the foremen. This did not work, as the parts for the scheduled jobs were not always available, in which case the foremen either waited for them (leaving the bottleneck idle) or continued with other available parts, making the schedule itself a useless tool. To rectify this, a tag system was put in place so that non-bottleneck operators could identify parts by colour. Red tags were put on bottleneck parts and green tags on all others. Red tags were given priority at all times: a run of green parts that would take longer than 30 minutes to complete had to be interrupted for a red-tag job. If more than one red (or green) batch was waiting, the foremen processed the job with the lower number on its tag.
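The foreman's decision rule is simple enough to express directly. The sketch below is my own illustration (the function name and the (colour, number) job representation are hypothetical, not part of any plant system): red tags beat green tags, and among equal colours the lower tag number goes first.

```python
def next_job(waiting):
    """Pick the next batch to run from the waiting list.
    Each job is a (colour, tag_number) pair; red (bottleneck) tags
    always outrank green, then the lower tag number wins."""
    return min(waiting, key=lambda job: (job[0] != "red", job[1]))

queue = [("green", 4), ("red", 7), ("red", 2), ("green", 1)]
print(next_job(queue))  # ('red', 2)
```

With no red tags waiting, the same rule simply returns the lowest-numbered green job, so the foremen never needed a second procedure.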

As a result of all the above-mentioned changes, work-in-progress inventories were reduced by 12% and a new customer-order shipping record was achieved: 57 orders worth $3 million shipped.

Material release system

At this stage, however, the tag system had caused a shortage of non-bottleneck (green) parts. They were not reaching assembly on time even though the bottleneck (red) parts were. It was found that the non-bottlenecks were occupied processing material that had been released to keep local efficiencies up, but that was not necessary for the final assembly of the orders containing the produced bottleneck parts. Churning out maximum units was only clogging the work-in-progress inventories throughout the plant. It must be reinforced here that a system of local optimums is not an optimum system. Activating a resource and utilizing a resource are not synonymous. Activating a resource simply turns it on, whereas utilizing it means using it in a way that moves the system closer to the goal: to make money.

My team developed a material release system for the bottlenecks, which served to trigger the release of bottleneck raw material at the rate at which the bottleneck was processing parts. The releases needed to be timed so that material would arrive at the bottleneck exactly when needed; released too late, the bottleneck would be left idle, decreasing throughput. We needed a signal link from the bottlenecks to the release-of-materials schedule. It was identified that it takes approximately two weeks for released material to reach the bottleneck, which could be added to the setup and process times of the bottleneck to determine the release time. Because we were dealing with only one work centre, we could average out the statistical fluctuations for better accuracy (to within about a day). A three-day stock would be kept in front of the bottleneck for safety.
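The timing rule itself is simple date arithmetic: release material two weeks (the measured transit time) plus the three-day safety buffer before the bottleneck needs it. A minimal sketch (the function name is my own):

```python
from datetime import date, timedelta

TRANSIT = timedelta(weeks=2)  # release-to-bottleneck travel time from the plant study
BUFFER = timedelta(days=3)    # safety stock held in front of the bottleneck

def release_date(bottleneck_start: date) -> date:
    """When to release raw material so it arrives just before the
    bottleneck needs it, with the three-day cushion in place."""
    return bottleneck_start - TRANSIT - BUFFER

print(release_date(date(2015, 11, 26)))  # 2015-11-09
```

Keeping the buffer as a separate term makes it easy to tune later if the fluctuations turn out larger or smaller than the one-day accuracy we observed.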

By calculating when the bottleneck parts will reach final assembly, the material release schedule for the non-bottlenecks can be calculated backwards. This, in effect, means that all the material release schedules are determined by the bottlenecks. Coordinating the routes in this manner resulted in a large decrease in work-in-progress inventory.

Further results of this new release system:

Revenues are up.

Efficiencies dropped initially, but have come back up.

The backlog of orders is completely gone (satisfied customers).

The response of management at this stage was very positive, but also somewhat sceptical as to whether the progress was only temporary. Therefore an additional request was made for a further 15% increase in revenue within a month. The methods used up to this stage were quite a long way from those generally accepted and implemented, and there had not yet been an opportunity to discuss this. This report will hopefully serve to clarify the methods used.

Once the material schedules had been implemented, there was no need for the tag system anymore. It had served us well in the initial stages, but was now obsolete.

Reduced batch sizes

We had been setting batch sizes according to an economical batch quantity (EBQ) formula. However, the EBQ formula rests on a number of flawed assumptions, as will be explained here.
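For reference, the standard economic batch quantity calculation takes the following form (the figures below are hypothetical). Note its hidden assumption that a setup carries the same real cost wherever it occurs; once bottlenecks and non-bottlenecks are distinguished, a setup saved at a non-bottleneck saves nothing real, so a single plant-wide batch size cannot be right.

```python
from math import sqrt

def ebq(annual_demand, setup_cost, holding_cost_per_unit):
    """Classic economic batch/order quantity: balances per-batch setup
    cost against per-unit inventory holding cost. Assumes setup cost is
    real everywhere, which fails at non-bottlenecks."""
    return sqrt(2 * annual_demand * setup_cost / holding_cost_per_unit)

print(ebq(annual_demand=10_000, setup_cost=50, holding_cost_per_unit=2))  # ~707.1
```

Because the formula charges every setup equally, it systematically recommends batches far larger than throughput considerations justify.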

The time spent converting raw material into finished goods, from the minute the material comes into the plant to the minute it goes out as part of a product, can be divided into 4 elements.

Process time is the time during which the part is being modified into a new, more valuable form. Setup time is the time the part spends waiting while the resource that will work on it is prepared. Queue time is the time the part waits for a resource while it works on other parts ahead of it. And finally, wait time is the time the part spends waiting at assembly for the remaining parts to arrive. Setup and process times are usually a small portion of the total, whereas queue and wait time consume the majority of it. Queue time is the dominant portion for parts passing through bottleneck operations, and wait time is the dominant portion for non-bottleneck parts. In both cases, the bottlenecks dictate how much time elapses.

If we decide to halve batch sizes, we halve the time it will take to process a bottleneck batch, and thus decrease both queue time for bottlenecks and wait time for non-bottlenecks. This then reduces the total time that parts spend in the plant by almost half.
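A small worked example (with hypothetical hour figures of my own) makes the effect concrete: because queue and wait dominate total time and both scale with batch size, halving the batch nearly halves a part's total time in the plant.

```python
# Illustrative hours for one part's trip through the plant. Setup and
# process are per-part constants; queue and wait scale with batch size,
# since a part mostly waits while the rest of its batch is worked on.
def total_time(setup, process, queue, wait, batch_factor=1.0):
    return setup + process + batch_factor * (queue + wait)

before = total_time(setup=2, process=1, queue=40, wait=37)                    # 80.0
after = total_time(setup=2, process=1, queue=40, wait=37, batch_factor=0.5)   # 41.5
print(before, after)
```

The residual 3 hours of setup and process is why the reduction is "almost" half rather than exactly half.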

Therefore, after load balancing had been carried out to meet the market demand without producing excess inventory, the next step was to decrease batch sizes. Reducing the batch sizes reduced the total capital committed to production, lowered the total time that work-in-progress parts spent in the plant, and led to greater throughput speed along with faster turn-around on customer orders. The resulting increased responsiveness to market demand (from 6-8 weeks to 3-4 weeks) then opened up new sales possibilities.

Increased sales throughput

The constraint of the plant was now no longer in production, but in sales. More orders were required to increase sales throughput. We were now able to present a competitive edge: fast response and delivery times. This selling point was communicated to the sales & marketing department, and they were able to generate more customers. A large new sales opportunity presented itself, in which 1,000 products needed to be shipped in two weeks. Previously, this would have been impossible without committing the entire plant to the order for two weeks, but by once again halving the batch sizes, we were able to promise 250 units each week for four weeks. And all this was done without negatively affecting existing orders. We managed to secure the customer with this combination of incremental weekly deliveries and low-quantity pricing. In fact, the customer actually preferred it this way, and was extremely pleased when we delivered the orders on time.

Old accounting rules

At the end of the three-month deadline, all of the new processes had been put in place and sales had increased significantly. There had been a 17% increase in revenue, and the plant had never been in a better position to make money. In spite of this, an internal audit using conventional cost accounting methods showed only a 12.5% increase. It also showed that, as a result of non-bottleneck idle time (lower local efficiency) and the increased number of setups due to smaller batch sizes, there was an increase in cost per unit and a decrease in 'productivity'. It is therefore necessary to review this accounting method so that it more accurately reflects the bottom line, given what we have learnt. An additional flaw in the conventional calculation process lies in the evaluation of inventory costs: it recorded the decrease in WIP and finished-goods inventory, and the decrease in purchased material costs, as period losses, because the cash payments avoided are only recorded in the next accounting period. Accounting is not my area of expertise, so I will not expand on that side of things, but it is worth noting that the current methods do not accurately reflect the plant's status.

Inventory buffers

It was found that, in an attempt to eliminate idle time at the bottlenecks, we were in fact releasing and processing materials for which there were no current orders. There was in fact an excess 20% capacity at the bottlenecks (and thus the plant) available for new orders and greater market share. A large deal with a European customer was presented, but it would require selling each product for less than the previously calculated production cost. That calculation, however, was misleading, because the order would be produced from excess capacity. The only real cost for these products was therefore the material cost, and any price above it would yield a profit. Low prices combined with shorter delivery times than the European competitors landed us a $20 million deal and opened up an entirely new market.
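The pricing logic can be made explicit with per-unit figures (the numbers below are mine, for illustration only; they are not the actual deal terms): conventional costing, with its allocated overhead, shows a loss, while the marginal view shows a real cash gain.

```python
# Hypothetical per-unit figures for an order filled from excess capacity.
# Labour and overhead are already being paid regardless of this order,
# so the only incremental cost of one more unit is its material.
full_cost = 900     # conventional "production cost" with allocated overhead
material = 400      # incremental (marginal) cost per unit
offer_price = 700   # below full cost, but well above material cost

conventional_margin = offer_price - full_cost  # looks like a loss per unit
real_margin = offer_price - material           # actual cash gained per unit
print(conventional_margin, real_margin)        # -200 300
```

This only holds while the order genuinely consumes spare capacity; once it displaces other throughput, the displaced contribution becomes part of its real cost.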

Having filled all the remaining capacity of the plant with work orders, another problem presented itself: there was no longer any excess capacity to absorb the statistical fluctuations inherent in production. This was solved by temporarily expediting and using overtime, and then by ensuring that the problem would not resurface by placing 'buffers' where they were required, meaning that just enough excess material was queued in front of operations to absorb the fluctuations. This, of course, increased the delivery time slightly, which had to be communicated to sales & marketing.

This had been the final step taken before this report was requested.

Conclusion

Throughput world

In summary, the focus of the plant changed, as the diagram below shows. One can say we went from the 'cost world' to the 'throughput world' in our approach to decision making.

In the 'cost world' the focus was on reducing costs at every opportunity and using excess capacity to produce excess inventory in an attempt to keep local efficiencies up.

In the 'throughput world', the plant is seen in its entirety as a means to reach a goal: to make money. To do this, the focus must be on increasing throughput whilst at the same time decreasing inventory and operational expense. All three measures move together, and none of them should be pursued in isolation.

The process of on-going improvement

The process undergone can be summarised in five steps, which can be applied generally by managers of other plants:

Step 1: identify the system's constraints; Step 2: decide how to exploit the system's constraints; Step 3: subordinate everything else to the step-two decisions; Step 4: elevate the system's constraints; Step 5: if in the previous steps a constraint has been broken, go back to step one, but don't allow inertia to become the system's constraint.

Production is an on-going process of improvement, and new problems constantly have to be dealt with as they arise. From what I've learnt, a good manager needs to be constantly asking 'what to change?' and 'what to change to?' The ability to answer these questions, and possessing or cultivating the skills and knowledge necessary to answer them, is the key to great management.