On January 21, 2015, Walter Frick wrote in the Harvard Business Review:
One of the U.S. Congress’s first acts of 2015? Trying to redefine what counts as full-time work, from 30 hours a week up to 40. It’s part of the latest attempt by Republicans to alter Obama’s signature healthcare law, the Affordable Care Act, and has already passed the House of Representatives. But it has also had the perhaps unexpected effect of putting the divide between full- and part-time workers front and center in American politics.
I asked former Clinton Labor Secretary and UC Berkeley professor Robert Reich about the debate, and what it means for employers, employees, and the future of American work. An edited version of our conversation follows.
The House has voted to change the definition of full-time work. It seems like the Senate may as well, and Obama has threatened to veto it. Why does the definition of full-time work end up mattering so much to our politics?
It matters under the Affordable Care Act because if full-time work is defined as 40 hours a week, employers can avoid the employer mandate [to provide health insurance] by cutting the work week down to 39 hours. It’s harder for them to do that if full-time work is defined as at least 30 hours. And of course if employers can avoid the employer mandate relatively easily, that means that more workers lose employer coverage, which, in turn, means that more workers have to rely on the government with regard to their health care, either through the Affordable Care Act or through extended Medicaid. That, in turn, puts a large and potentially growing burden on the federal budget, and could cause the deficit to expand.
Government has been a driving factor, along with unions, in defining how we think about what a workday looks like. How has that definition evolved over time?
Much of the tumultuous labor history of the 19th century centered not just on wages, but also on hours. At the center of all of that was the eight-hour workday. When Henry Ford moved to a 40-hour workweek in 1914, that went hand in glove with his increase in wages. And he did both for the same reason: he thought that workers would be more productive and that they would be more satisfied and loyal. And then by the late ‘30s organized labor began focusing not only on wages but also on the conditions of work. The great GM strike of 1937, for example, was more about working conditions, such as sick pay and bathroom breaks and so forth, than it was directly about wages. Another important marker was the 1938 Fair Labor Standards Act, where you had, for the first time, a national standard: a 40-hour workweek, time-and-a-half for overtime, a minimum wage, and a ban on child labor. Social Security had been passed just three years before, and it was not just retirement security for retirees; it also included disability insurance and worker compensation.
It was clear that in the 1930s we were thinking not only about wage security but also about benefits and various forms of social security, provided by government and ultimately by the private sector as well. I think the big change that started in the 1970s was the change in the employment contract itself, because you had a dramatic drop in the percentage of workers who were in the private sector and unionized. And that meant that you no longer had a system of prevailing wages or prevailing benefits. And we see from the 1970s onward a movement toward where we are right now: more workers who are without benefits coming from their employment contract, and whose wages are less a function of collective bargaining than of their own individual bargaining leverage, which is extremely small if the worker has no particular educational advantage over any other worker.
Is this part of a larger divide over how full-time workers and part-time workers are treated? Or is it just a specific policy debate over how the ACA works?
It’s a broader debate. The ACA certainly brings it into relief, but the more fundamental question is two-fold. First, are workers assets to be developed or are they costs to be cut? Some employers regard even low-wage workers as potential assets. These employers are not only concerned about the costs of turnover and recruiting and training employees who might otherwise leave, but they’re also aware that employee loyalty and relational capital [are] very important to their business, even with regard to frontline workers or workers who are relatively low paid. Other employers have taken a very different approach. They regard workers as costs to cut. They are concerned that payrolls are too high. They look to cutting payrolls as the easiest and most direct way of improving performance. Some empirical work has been done as to which of these views pays off. I don’t think there’s any question that over the longer term, employers who view their workers as assets to be developed do better. But let’s face it, we’re working in a very short-term world right now, and so there are many forces creating incentives for employers to join the camp of regarding workers as costs to be cut.
The other factor here has to do with what level of employee we’re dealing with. Many firms regard their lowest-wage employees as fungible. Even if their view is that their talent represents an asset to be developed, they don’t regard their frontline workers or low-wage workers as talent; they regard those workers as fungible costs. And so you get this second divide in terms of where employers draw the line between their so-called talent and their fungible costs.
Economists tend to talk about health benefits as if they are totally fungible with wages. If that were true, and employers could get away with cutting a 40-hour work week down to 39 hours and no longer having to pay for benefits, they would have to offer superior wages to make up for it. What’s missing in that kind of economic reasoning?
Well, the market is not, by any measure, perfectly competitive. The labor market, especially, has a lot of stickiness to it. Workers are not perfectly substitutable. A lot of people cannot move easily from where they are now working to another opportunity. The labor market is also very weak. There are millions of people who are no longer even in the labor market. They have given up looking for work, but they could come back in if demand picked up. Wages are, as a result, very low. Most of the new jobs that have been created in the United States pay less than the jobs that were lost in the Great Recession. So for all these reasons the notion that if I, as an employer, have to pay less in one domain or one dimension, I have to make it up in another, simply doesn’t hold true.
What about from the workers’ perspective? In a pre-Obamacare landscape the argument is very clear. If you aren’t able to get health insurance at work, particularly if you don’t qualify for Medicaid and depending on your health status, you may be denied coverage or charged an exorbitant rate in the private market. Today community rating is in place and the private exchanges are subsidized. What does the equation look like from a worker’s perspective, in terms of just how much they really want to be on that employer plan versus the exchanges?
We don’t know very much yet. We’re going to find out a lot more over the next few years. But the preliminary evidence seems to show that workers are relatively indifferent as to whether they’re on an employer plan or are getting their insurance off an exchange. Their real concern is cost: not just the cost of premiums, but also copayments and deductibles. One story that has not been adequately talked about, and a problem that hasn’t been adequately addressed, is that we see deductibles and copayments skyrocketing. Even though many workers today are getting insurance who might not have gotten insurance before on an employer-based plan, they are indirectly paying a great deal because of the size of the copayments and deductibles. And I might add this is also the pattern with regard to employer-provided insurance.
The U.S. is unique in terms of employer-sponsored health insurance. But does this kind of divide, about the benefits that accrue to full-time workers versus part-timers, have a corollary in other countries?
There’s not much of a corollary. If you look at other advanced nations, you see that, because of various forms of national health insurance, employer benefits don’t figure in nearly as much. Meanwhile there is much more part-time and temporary work in the United States, for a variety of reasons, some of which are sociological. Americans are working longer hours — at least Americans who are employed are working longer hours on average than Europeans or even Japanese workers. American workers are also working on weekends and at night to a much greater extent than European or Japanese workers. In fact the latest data show that 29% of American workers work weekends, 26% work at night. These are the highest percentages of any industrialized nation.
How does all of this relate to the challenges that we might face if the “sharing” or access economy (Uber, Airbnb, etc.) ends up growing?
It’s definitely growing. We’re seeing, with regard to a majority of workers in America, that they are moving toward a world in which they have few, if any, employer benefits. More of them are freelancers, independent contractors, temporary workers, and part-time workers. Their remuneration is set in what’s essentially a price auction, a spot auction market, [and] varies from day to day or even from week to week. Now if you’re young and well-educated, this kind of a system may be quite attractive. There are many young, well-educated people who don’t want to be as regulated as a typical full-time employee. They want to be more entrepreneurial, and they don’t mind that they have to pay for the equivalent of all their benefits. But the older the worker, the less educated the worker, and the more the worker has a family or other dependents, the more likely that worker is to regard this emerging system as a far greater burden and a far greater challenge, because most people have bills they have to pay that are fairly regular. But if their compensation is variable, they can find themselves in very big trouble.
I want to just go back to one point you alluded to a while ago. Many of the benefits that became part of the standard American labor contract date back to World War II and the days when about a third of our private sector workforce belonged to a union. Those benefits were included in contracts for two important reasons: first, because there were price controls, and the benefits were a way of circumventing the price controls, at least with regard to labor. The second reason had to do with taxes. It became a very attractive feature of labor contracts to provide these benefits, because many of these benefits were not taxed the way regular salaries and wages were. So much of the benefit structure we have today is a legacy of the Second World War.
If you could wave a magic wand and put in place the support and welfare policies that you believe are necessary, would you be comfortable with this broad idea of switching risk mitigation from the company to the government, and letting the contracts between workers and companies be more fluid? Or do you think that there’s a risk in going in that direction?
I would definitely support that direction. I think that’s the direction we have to move in, for a number of reasons. First, the benefit structures that we still have in place amount to a very large tax subsidy going to the highest paid workers. Most low-wage workers are not getting tax-subsidized benefits. That makes no sense — that’s an upward redistribution that is socially unjustifiable. Secondly, these benefit systems are very inefficient. They keep people locked into jobs that they don’t necessarily want, or they prevent people from taking opportunities that might otherwise be available. They impede mobility in the labor market. I think it makes a great deal of sense to move away from these employer benefit systems to benefits that are provided through government, either directly or indirectly.
https://hbr.org/2015/01/robert-reich-on-redefining-full-time-work-obamacare-and-employer-benefits
While the focus of this article is on labor and income from jobs and benefits, it fails to assess the impact that tectonic shifts in the technologies of production are causing: specifically, the destruction of jobs, the shift from labor-intensive to capital-intensive (non-human) input, and the devaluation of the worth of labor’s input.
Full employment is not an objective of businesses. Companies strive to keep labor input and other costs at a minimum in order to maximize profits for the owners. They strive to minimize marginal cost, the cost of producing an additional unit of a good, product, or service once a business has its fixed costs in place, in order to stay competitive with other companies racing to innovate technologically. Reducing marginal costs enables businesses to increase profits, offer goods, products, and services at a lower price, or both. Increasingly, new technologies are enabling companies to grow at near-zero marginal cost without having to hire people. Thus, private sector job creation in numbers that match the pool of people willing and able to work is constantly being eroded by physical productive capital’s ever-increasing role.
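As a rough illustration of that arithmetic (the figures below are hypothetical, not drawn from the article), the short Python sketch shows how, once fixed costs are in place, average cost per unit falls toward the near-zero marginal cost as output grows, so revenue and profit can expand without any additional hiring:

# Hypothetical figures for illustration only: a firm whose fixed costs are
# already in place and whose marginal cost per additional unit is very low.
fixed_cost = 1_000_000   # plant, software, and automation already paid for
marginal_cost = 0.10     # cost of producing one more unit
price = 5.00             # sale price per unit

for units in (100_000, 1_000_000, 10_000_000):
    total_cost = fixed_cost + marginal_cost * units
    average_cost = total_cost / units      # approaches marginal_cost as output grows
    profit = price * units - total_cost    # grows without any added labor input
    print(f"{units:>12,} units: average cost ${average_cost:.2f}/unit, profit ${profit:,.0f}")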
Over the past century there has been an ever-accelerating shift to productive capital, which reflects tectonic shifts in the technologies of production. The mixture of labor worker input and capital worker input has been changing at an exponential rate for over 235 years, in step with the Industrial Revolution (starting in 1776), and had been changing long before that, with man’s discovery of the first tools, though at a much slower rate. Up until the close of the nineteenth century, the United States remained a working democracy, with the production of products and services dependent on labor worker input. When the American Industrial Revolution began and subsequent technological advance amplified the productive power of non-human capital, plutocratic finance channeled its ownership into fewer and fewer hands, as we continue to witness today with government by the wealthy evidenced at all levels.
People invented tools to reduce toil, enable otherwise impossible production, create new highly automated industries, and significantly change the way in which products and services are produced, from labor-intensive to capital-intensive: the core function of technological invention. Binary economist Louis Kelso attributed most changes in the productive capacity of the world since the beginning of the Industrial Revolution to technological improvements in our capital assets, and a relatively diminishing proportion to human labor. Capital, in Kelso’s terms, does not “enhance” labor productivity (labor’s ability to produce economic goods). In fact, the opposite is true: it makes many forms of labor unnecessary. Because of this undeniable fact, Kelso asserted that “free-market forces no longer establish the ‘value’ of labor. Instead, the price of labor is artificially elevated by government through minimum wage legislation, overtime laws, and collective bargaining legislation or by government employment and government subsidization of private employment solely to increase consumer income.”
Furthermore, according to Kelso, productive capital is increasingly the source of the world’s economic growth and, therefore, should become the source of added property ownership incomes for all. Kelso postulated that if both labor and capital are independent factors of production, and if capital’s proportionate contributions are increasing relative to those of labor, then equality of opportunity and economic justice demand that the right to property (and access to the means of acquiring and possessing property) must in justice be extended to all. Yet, sadly, the American people and their leaders still pretend to believe that labor is becoming more productive and ignore the necessity of broadening personal ownership of wealth-creating, income-producing capital assets simultaneously with the growth of the American economy.