Meta Cities: Repurposing Where We Live and Work

Harvard Business Review recently released a 2023 talent management piece by Richard Florida from the University of Toronto and Vladislav Boutenko, Antoine Vetrano, and Sara Saloo, all with Boston Consulting Group, entitled The Rise of the Meta City. Their thesis identifies a novel development in the evolving work-from-home (WFH) paradigm, one worth considering as we envision the future of both our careers and where we choose to live.

It is no secret that mobility-enhancing technologies, combined with the face-to-face limitations wrought by the pandemic, resulted in a rapid expansion of remote work. The jump from approximately 6% of the American workforce working remotely in 2019 to 18% by 2021 shows how briskly the phenomenon swelled. A BCG survey from August 2023 indicates that only 7% of companies require a full-time return to the office, whereas 8% of companies have discarded offices completely. This means the vast majority of businesses are operating with some form of hybrid working.

A consequence of the proliferation of WFH employment is that many more digitally-centric employees are choosing to live outside the traditional commute radius of their employers’ offices. With customary commutes curtailed, workers are incentivized to look at residential options in areas that are more affordable and offer a higher quality of life. For example, a LinkedIn study identified small to mid-sized cities receiving WFH transplants, such as Springfield, MA; Tallahassee, FL; Portland, OR; College Station, TX; and Wenatchee, WA. Some locations, such as Tulsa, OK, and Perry County, IN, even offer cash incentives for WFH employees to move there.

This realignment of workers from office to home and from employer-based cities to increasingly distant residential locations is starting to reveal patterns. A significant new template emerging is the rise of what Florida et al. call the “Meta City”. To start, it is helpful to think of meta cities as not entirely fixed geographically. The old inner-city-to-suburb-to-exurb-to-rural model is not applicable here. Rather, the dimensions of the meta city extend from a major economic hub city to a host of far-flung smaller cities in other parts of the country or globe. Modern telecommunications technology and talent flows allow cities that are geographically separate to operate economically as a single unit.

Some examples help to visualize this phenomenon. New York City is a top-dog economic hub in a number of industries, but most importantly in the finance sector. Financial talent flows into and out of NYC most measurably with other American cities like Los Angeles, Miami, Chicago, Washington, DC, and Atlanta, among others. This hub-and-satellite configuration comprises a finance meta city. London, too, is a major finance hub, with Manchester, Birmingham, Dublin, Edinburgh, and Cambridge serving as financial talent satellites. San Francisco is a principal technology hub city connected to smaller, but also tech-heavy, cities like Austin, Seattle, Boston, and San Diego.

The concept of talent flow is crucial to an understanding of the growth of meta cities. The flow of talented employees refers to physical mobility of people among the cities of the meta unit and also to remote contributions made by talent within the unit. To illustrate, Emily retains employment with Company A in New York, but chooses to live and work from Miami because of the high cost of living in New York and its long winters. Jason also works at Company A in New York where he intentionally lives because he loves the vibrancy of the city, and from there collaborates with Emily on a daily basis as part of a development team.

Although Florida et al do not refer to rural living, presumably the meta cities are speckled with geographically dispersed talent who “work” inside of meta cities, but live in a variety of non-urban locations.

Meta cities are an interesting outgrowth of the remote working trend, a glimpse into how the new generations choose to live and work, and also how the economy of the twenty-first century is coming into its own.

 

An AI Bill of Rights

Often it is difficult to separate living from working. Our personal lives and professions can become intertwined such that it can seem pointless to differentiate those aspects which are personal from professional. Such is the case when considering one of today’s hottest topics, the impact of artificial intelligence. Is AI going to sway our lives in general or be mostly an employment issue? A fair prediction is that AI is going to change the landscapes of both our lives and of our work. 

As citizens and as workers we should have a strong say in what the influence of AI is going to be in our daily lives and on our jobs. The disruptive potential is too huge to leave AI development solely up to engineers and their corporate employers. If AI advancements are to be the result of free market innovation, then those of us who are future customers and recipients of its consequences should have the freedom to weigh in and heavily influence its maturation. 

A practical way to approach this challenge is through the lens of individual rights. Ever since the seventeenth-century philosopher John Locke proposed the existence of fundamental natural rights, such as life, liberty, and property, we Westerners have organized our social, political, and economic institutions around the notion of personhood rights to both preserve and extend the enjoyment of our lives. We bestow upon ourselves the rights necessary to live fruitful lives free of destructive intrusion. Now is the time to apply these rights in the face of AI infiltration. 

A useful place to ground a national debate about AI’s proliferation is with the Biden Administration’s White House Office of Science and Technology Policy’s proposal known as the Blueprint for an AI Bill of Rights (https://www.whitehouse.gov/ostp/ai-bill-of-rights/). This is a thoughtful approach to identifying the key areas of contention in the planning, application, and mobilization of AI-based automated systems. 

Five principles are presented as foundational to designating what constitutes an AI Bill of Rights. To summarize: 

Safe and Effective Systems: An AI system should undergo input and testing from various sources to ensure its ability to deliver value free from the risk of malicious or unintended consequences. Humane industry standards and protective measures should apply, including the power to shut down harmful applications. Data usage is to be transparent, necessary, and respectful of personal integrity. 

Algorithmic Discrimination Protections: The biases, inequities, and discriminatory practices of people should not migrate to automated systems. Indefensible digital treatment of people based on their individual differences is to be considered unjust. Legal protections of ordinary citizens and farsighted equity assessments of intended and unintended uses of systems should be crucial in the design and deployment of AI systems. 

Data Privacy: This concern has been with us since the advent of Web 2.0. People should have ownership and agency over their data. The right to privacy is strong among free and independent people. This should be reflected in the automated systems they use. Exercising consent and having the ability to opt in and out of these systems with no restrictions should be inherent in their development. 

Notice and Explanation: It should not take a computer science degree for ordinary users to understand what they are getting into with AI systems. Clear and unambiguous language that informs operators about system functionality, intent, outcomes, updates, and risks is to be considered basic. 

Human Alternatives, Consideration, and Fallback: In short, when users determine that an automated system has become too unwieldy or its functionality too untenable, they should be able to reach a real person for help. No one should feel trapped within the confines of an all-powerful system they do not understand and cannot properly operate. 

These principles could become a friendly conversation starter. As citizens we need a simple tool to unify the discussion as we confront this significant challenge. This AI Bill of Rights could be it. 

We Are More Than Checklists

Back in 2009 a well-received book called The Checklist Manifesto was published by Atul Gawande, a surgeon, author, and public health researcher. The book promotes developing and using checklists to enhance the quality of outcomes from the execution of complex procedures. Dr. Gawande cites many examples of how the deliberate use of checklists leads to greater efficiency, more uniform discharge of protocols, and improved protections, particularly in procedures where safety is a concern.

Upon examination, the causes of unintended consequences and accidents can often be attributed to missed steps in a process which, had they been followed, would have mitigated or prevented the mishap. Sure, we all make mistakes. But if we take the time to analyze why a mistake was made, we often find it was because of things like hurrying too much, lacking focus, being distracted, or not having enough experience. These flaws almost always mean measures that should have been taken were not.

So, deploying and consistently using complete checklists makes perfect sense. In fact, the application of step-by-step lists is considered such a best practice these days that many of our careers can be seen as little more than a requirement to effectively execute a series of predetermined sequential actions. Take a look at almost any job description. It is little more than a laundry list of expected deliverables, like a set of boxes to be checked. It could be said that much of our work is therefore formulaic.
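To see just how formulaic checklist work can be, consider a toy sketch in Python; the steps and names here are invented purely for illustration. Once a job is reduced to an ordered list of checks, a few lines of code suffice to run it.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    name: str
    done: Callable[[], bool]  # each box to tick is just a yes/no predicate

def run_checklist(steps: List[Step]) -> List[str]:
    """Walk the list in order and report any unticked boxes."""
    return [step.name for step in steps if not step.done()]

# Hypothetical steps for a formulaic workday
daily = [
    Step("report filed", lambda: True),
    Step("inbox cleared", lambda: False),
    Step("log updated", lambda: True),
]

missed = run_checklist(daily)
print(missed)  # reports the one unticked box
```

The point is not that a snippet like this threatens anyone's job; it is that any role fully describable as such a list is, by definition, already expressed in a machine-friendly form.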

The more we reduce our careers to predictable, stringent, and rote to-do rosters, the more accommodating we make them for AI replication. Author Ian Leslie makes an interesting observation in a recent Substack piece. Responding to the fear many express about the growth of AI, he points out that we help the machines replace us by adapting our work lives to the ways AI works. When human agency is overly systematized, we hand our replacement instructions to an AI that may be better at checking boxes than we are.

When we model our work behavior on a simple inventory, we should not be surprised when AI mimics it. AI is algorithmic. It uses models and arrangements of variables in a mechanized and calculated way. As we are finding out, AI can outperform us in a growing number of jobs, especially those that resemble checklists. A pertinent quote from artist Robert Irwin in the Ian Leslie piece is, “Human beings living in and through structures become structures living in and through human beings.”

As we determined above, checklists certainly have their place. However, as people we need to look at our work lives as more than an amalgamation of discrete work tasks and responsibilities. To be human, especially in our careers, must mean more than that.

Our evolution requires innovation and novelty. It demands an expression of humanity which is an added value above any pre-arranged framework. It seeks to celebrate intuition and ingenuity and even uncertainty. The careers of tomorrow will thrive because they bring a richness of the human experience not easily cloned by a computation.

Romanticism arose in Europe toward the end of the eighteenth century in reaction to the heavy cultural emphasis on rationalism, science, and industrialization. Instead, Romanticism insisted on honoring art, music, literature, nature, and the intellectual capacity of the individual. It exalted human emotion and aesthetic experience. Above all, the message of Romanticism was that to be fully human required embracing the wide range of human expression, not being limited to the mechanized worldview of materialists and rationalists.

The time may be ripe for a neo-Romanticism in the age of AI and checklists. Efficiencies have their place. But let’s not confuse them with being human.

 

 

Love Video Games? Make Gaming a Career With These 6 Tips

Another Guest Post from contributor Leslie Campos


 

Video games are an enjoyable hobby, but what if you could make gaming into a career? With the right skills and education, it might be easier than you think to build the career of your dreams. Bill Ryan Writings offers this career development advice for gamers who want to make their passion into a profession.

Plan Carefully

The video game industry involves countless careers and job paths. Since you want to make quality decisions in planning your career, explore the options carefully.

 

All types of roles support video game development, including art, technical, programming, engineering, business, and marketing positions (and many more). Consider your interests, strengths, and possible job paths.

 

Then, determine how much time and energy you can invest in education and skill development.

Build Skills

Playing video games is practically a prerequisite to building a career in the gaming industry, but it’s not the only requirement. Playing games does build many soft skills, notes ZenBusiness, but to be competitive in the job market, you also need to hone skills related to your career path.

 

For example, learning to code, use editing software, and check for bugs is crucial in video game careers. Yet the specific skills you need will depend on the role you want to work in. The good news is that many skills are ones you can build on your own.

 

For example, you can self-study to become fluent in computer programming languages and begin coding projects. Practicing various types of art and graphic design could improve your craft. Yet formal education may still be an important step in building your career.
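As a hypothetical example of that kind of self-study (the function and its cases below are invented for illustration), a beginner-sized Python exercise might be to write a small game-style routine and then check it for bugs with a few plain assertions:

```python
def apply_damage(health: int, damage: int) -> int:
    """Reduce a player's health, never letting it drop below zero."""
    return max(health - damage, 0)

# "Checking for bugs" the simplest way: assert what the function
# should do for known inputs, including the tricky edge case.
assert apply_damage(100, 30) == 70
assert apply_damage(10, 25) == 0  # health must not go negative
print("all checks passed")
```

Exercises like this scale naturally: swap the bare assertions for a real test framework later, and the same habit of verifying behavior becomes the QA and debugging skill employers look for.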

Get a Degree

For some job opportunities, you might need more than casual skill-building to get an interview. Earning a degree in graphics, software engineering, game development, or another technology discipline could make your resume stand out.

 

Online degree programs let you study and earn a degree while working and maintaining a personal life. Choose an accredited school with competitive tuition; this could be the ticket to an affordable education and a new career path.

Network Online

Gaming, as both a hobby and a career, is popular around the world. That makes it easy to connect with people you can learn from and share ideas with. Video game communities exist for every type of game, as Game Designing outlines, and joining them can help you find opportunities and network.

 

Gaming clubs may also be a way to get feedback on your work. Sharing with a gaming group could help you polish up a project for your portfolio, increasing your odds of getting a gaming gig.

Create a Resume & Portfolio

Writing a clear, professional resume is the first step in any job search. Use the resume format that best fits your experience, whether chronological, functional, or hybrid. Include relevant keywords for the gaming industry, and highlight your skills, certificates, and education.

 

A strong resume is a must for any job search, but a portfolio levels up your application, especially in the gaming industry. And because video games or graphics are hard to insert into a resume, take time to build a portfolio site to display your work.

 

Buying a domain name and creating a website may sound like a lot of work, but it’s the best way to design a professional portfolio. If you code the website yourself, it can also serve as a portfolio piece.

Apply to Jobs

With the right skills, community, and degree, finding a job might be the easiest step in your gaming career journey. Especially if you enroll in a degree program, internships are readily available for on-the-job experience and skill-building.

 

Or, you can apply to be a video game tester, start in an entry-level quality assurance, art, or journalism job, or join a gaming company in an administrative or support role to get in the door.

 

A career in gaming might seem like an unconventional path. But for people who are passionate about video games, developing skills and even pursuing a degree will be worth the effort. The result is a professional path you will love and grow in.

Flextime Workplaces: An Update

As has been widely reported over the past couple of years, workplaces, particularly in the knowledge economy, have either added flexibility features to their operations or are being pressured to do so. The combination of Covid-related adjustments and technical innovations has prompted a reassessment of what productivity, and by extension appropriate workplace agency, looks like in the modern workplace. 

A 2021 Ipsos survey revealed that, globally, 30% of workers would attempt to leave their jobs if required to return to the pre-pandemic office setting. Many in the ever-plugged-in younger cohort of workers see only upside in jobs with flextime. Benefits such as managing the complex demands of modern living, caring for children and elderly parents, reducing commuting time, and working when one is most energetic and productive are among the advantages cited for pliable scheduling and task requirements. 

Flextime features are now much more present in recruiting job descriptions. Some of this is undoubtedly because of increased demand for flexibility from a workforce that seems to be sorting itself between results-only and traditional workplaces, but some is also due to uncertainty about the future. Covid has not completely gone away, and with further environmental changes said to be coming from climate change, who knows what is next? Disruption is at least as likely as stability when planning operationally. 

However, workplace changes of the sort being described here need to be assessed and designed thoughtfully. It is easy to dump on traditional workplaces as having rigid, arbitrary, and ineffectual routines, such as habitually scheduled staff meetings laden with fill-in blah, blah, blah. Yet as resiliency transformations occur, it is useful to see not only what is gained but also what is lost by such modifications. 

A case could be made that as customary practices dissolve not all the consequences may be necessarily positive. Of key importance is what it means to be professional. Parameters were established over time to separate work life from non-work life. We got used to sliding in and out of work modes with a regularity that brought predictability, certainty, and some semblance of balance. 

One negative element of blurring the distinction between work and leisure time is the “always on” phenomenon. When flitting in and out of work mode multiple times per day, including answering supervisor emails at 8:30 pm and being ready to respond to the Amsterdam office at 6:30 am, cumulative work time can approach 10-12 hours. That raises the question of who benefits. Probably not the worker. 

Also, the professional norms and protocols used in performance reviews and advancement decisions have been based on an in-person work context. Are workers who work from home being judged fairly against legacy achievement standards? Managers still wedded to the notion that time on task always equals productivity may be less inclined to view fragmented work as effective, even if the results are of similar quality, or perhaps better, than before. 

This can be especially problematic for new hires onboarded with a company practicing flextime. How well can management really get to know their direct reports when they are working remotely? Perhaps fine — or perhaps not. New workers are motivated to do well at their new jobs and are trying to navigate expectations and learn company culture digitally. Might they be ripe for various types of exploitation, such as working exceptionally long hours or having to face other unreasonable demands from management or co-workers in a flextime environment? The possibility is certainly there. 

Decentralization does have its benefits. But it also could have liabilities. As we redefine what it means to be professional in a flextime world, we need to be mindful of how to achieve efficiency in a way that rewards both management and front-line workers. This challenge is a subset of organizational agility and a crucial one going forward. 

Questioning the Future of AI

When I drive my E-ZPass-less car through the tollbooth on I-93 in Hooksett, NH, I intentionally swing to the right to hand a dollar to the tollbooth attendant. When checking out from a shopping trip at a big box store, I prefer paying a person at a cash register rather than using the self-serve payment scan system. 

It is not that I am some sort of crotchety Luddite who shuns digital progress. I pride myself on maintaining some decent level of technical functionality as I age. But I have come to question why those who design and build our Artificial Intelligence (AI) systems are obsessed with things like automation. In fact, the more I investigate AI the more surprised I am that AI is being utilized so narrowly, unevenly, and menacingly. 

The AI movement is powerful, significant, and potentially authoritative regarding how our personal and work lives will be lived in the coming years. The scale of its reach places it in a class far beyond the technological tinkering improvements we generally see with new phone models or app developments. Machine learning is far more enigmatic than a better video camera or gaming platform. 

Momentous changes are likely in a broad range of fields from mechanics to medicine and are expected to reshape work and modify markets. Many of these transformations will be welcomed, perhaps cherished, but others perhaps should not happen at all. 

When looking at AI today it seems too much of it is focused on building systems that either automate functions, collect data, or conduct surveillance. This should be concerning. The likelihood of jobs being lost, governments and companies holding vast quantities of our personal information, and our personal freedoms becoming threatened is not some far-fetched paranoid delusion, but an ugly scenario we should work to prevent. 

There is progress and then there is degeneration. AI could give us either or both. As an analog, I think of my attitude ten to fifteen years ago about social media. Then, the crowdsourcing of unregulated input from the global community augured richer and more transparent conversations about any number of topics. Or so I thought. Today social media looks like a cesspool of disinformation and disgruntlement ushering in social breakdown. Not all innovations should be welcomed. 

In our democracy, while we still have one, the general public needs to be actively engaged in monitoring the AI powers that we have and weighing in on policies to determine what AI engineers develop. Living with a laissez-faire attitude of “Well, whatever the markets come up with will be fine. Markets know best” can lead to costly and offensive ruptures in the very framework of society. Citizens should insist that AI be deployed in a generally advantageous manner, as described by utilitarian philosophers like Jeremy Bentham: “the greatest amount of good for the greatest number”. 

Instead, it looks like AI development is being driven more by the acquisition of corporate profit and power than by what benefits society. One does not need to be a wild-eyed Socialist to question whether a disruption as encompassing as AI could pose hazards to society. Those who control the development and deployment of AI will have a lot of authority and say in how our economy operates and how our future day-to-day lives are experienced. Concentrations of power have traditionally been held suspect in America. Well, we have one in the making. Let’s pay attention. 

The ultimate direction AI takes does not have to be decided solely by engineers and corporate C-levels who find business in selling only surveillance and automation tools. AI could be targeted to complement and improve the work done by real people, while also creating new activities and opportunities that keep workers gainfully employed. We have a choice — let AI rule us or we rule it. Hopefully, we will choose wisely. 

Strengthening Knowledge Sharing Online

The news is not that we are continually shifting most of our knowledge-economy work time online, but rather that we are learning more over time about what works and what does not when doing so. Take the Training & Development (T&D) field. Here is an industry which had a head start, long before Covid, in providing digital and distance learning opportunities. Having designed and delivered virtual and hybrid instruction programs for a relatively long period, it can reasonably be expected to offer lessons to other business sectors about how to disseminate knowledge in an online environment. 

Another area practicing distance learning, admittedly more than it currently wants to, is education, in both K-12 and higher ed. Like T&D, its mission is to leverage the power and ubiquity of computers and similar devices, along with the public’s basic tech literacy, to deliver teaching and learning when it is impractical to house students in traditional classrooms. Here too, best practices are being identified as teachers, schools, and communities face the challenge of providing quality education online. 

Together, T&D and education are revealing methods and conditions worth establishing when online workplace activity involves information sharing, change management, customer engagement, and staff development. An analysis of peer-reviewed literature, the T&D/education marketplace, and anecdotal reports from distance learning practitioners suggests key practices for formulating and implementing remote instruction courses and programs. It is important to understand, however, that the best of these practices are not merely disjointed techniques produced through trial and error, but rest upon a philosophical foundation. 

Lev Vygotsky was a Soviet-era psychologist renowned worldwide to this day for his scholarship on how humans make meaning, in other words, cognitive development. His theory, in short, is that people acquire cultural values, beliefs, problem-solving strategies, and practical knowledge through collaboration with others, especially more knowledgeable people. Comprehension and meaning, according to Vygotsky, are derived in a social context, which makes community the fertile ground from which people learn. 

Today, Vygotsky’s theory compels developers of online educational and training curricula to migrate characteristics of in-person community to the digital environment. In doing so, instructors and trainers are better able to facilitate concept and knowledge acquisition among their students and trainees. 

We therefore need to trust in the interconnectivity and interplay possible through virtual contact. Although virtual connection may still feel novel to older generations, society is clearly moving toward a norm characterized by remote connections with others, whether through social media, FaceTime, or online short-term credentialing courses. 

Three ideal practices which take advantage of social cohesion include: 

Being Present – This can range from presenting direct instruction synchronously (in live time) to being available for individual student/employee questions to mentoring. There will be occasions when asynchronous (non-live-time) communication, such as message boards, forums, and course policies, needs to be visible to all participants, but in general being directly available or on call during set hours leaves participants feeling less abandoned and insecure. 

Interactions – Encouraging participant interaction advances information sharing and social learning, which lead to literacy. Three key learning dialogues are teacher-to-student, student-to-student, and student-to-content. Promoting such exchanges generates effective growth-oriented connections between teachers and students; purposeful explorations conducted among students; and investigations between a student and a topic area’s facts and concepts. 

Discussion – Promoting opportunities for students to participate in synchronous and asynchronous discussions creates substantial educational value. Encounters involving questions, reflections, responses, and decisions support participant growth. Thanks to digitization, well-structured discussions and deliberations can strengthen any course. 

When tasked with planning for distance training and teaching opportunities keep in mind the importance of generating social coherence. You may find less has been lost going virtual than you initially feared. 

Distributive Work Gets A Boost

One of the significant consequences foisted upon the economy by the Covid-19 outbreak has been the rapid scaling of work completed outside of the office, i.e., at home. What is commonly known as remote work, and now increasingly referred to as distributive work, has been growing for the past twenty years or so. But in its short history it has never experienced a surge in practice like the one it is getting now. 

My guess is that distributive work is conventionally thought of across most businesses as secondary in its productive impact relative to being onsite, not unlike the way online courses have tried shaking off their reputation of being course lite. However, the severity of social distancing to break the chain of virus transmission is forcing the knowledge economy to rely on high quality distributive work to stay alive as never before. Indeed, it is in the knowledge economy, comprised of smart and skilled workers producing goods and services worldwide, where distributive work holds its greatest promise. 

It may be useful to know the thoughts of someone who has pioneered and cultivated distributive work for years and is now a leading voice in the movement. Matt Mullenweg was one of the founding developers of WordPress, the digital content management system, and is the founder of the diversified internet company Automattic, with roughly 1,200 employees distributed across more than 70 countries. He not only continues to evangelize distributive work but leads a set of companies that practice it daily. 

He is also convinced distributive work need not be just an off-the-shelf option management reaches for during times of disruption, but a model of productivity capable of surpassing the performance of traditional office-setting work. 

Mullenweg promotes worker autonomy as key to motivation and efficiency and is much more concerned with worker output than input. While retaining some in-person collaboration, in a much reduced and targeted form, he recognizes the impediments of cramming a lot of people onto a single site. A myriad of distractions can erode a worker’s sense of autonomy: office politics, intrusive co-workers and managers, long off-topic chats, shared facilities, a narrow set of expected in-house behaviors, and a feeling of having little control over everything from the office temperature to the smell of someone’s lunch. 

With that in mind he identifies five levels of distributive work from low to high effectiveness. To quickly summarize: 

  • Level 1, which is now old-school, has workers using telephone and email offsite to augment their work, but with the belief that the “real” work is done at the office. 
  • Level 2 is an attempt to recreate the office elsewhere through VPN and conferencing software that supplement voice and email. Most businesses are still mired in levels 1 and 2. 
  • Level 3 demonstrates an intentional effort to adopt the best software and equipment available to share knowledge seamlessly and transparently across the organization. This can include good lighting, microphones, and communication tools like Zoom, Slack, and P2. 
  • Level 4 places a premium on asynchronous and written communication, moving away from an over-reliance on live interactions. The goal here is to improve the quality of decision making even if its pace is slowed. 
  • Level 5 is where production capability is shown to be measurably improved over traditional work methods. 

Mullenweg contends that the manufacturing factory model, in which all employees look busy at the same time and in the same place, does not always translate well into the cognitive economy. By valuing quantifiable and qualitative output first, and by providing workers with the means to join forces cooperatively across distance, the "workplace" can be not only redefined but rendered more fruitful. 

Looking for a humane and profitable opportunity amidst a global contagion may be difficult. Perhaps refining distributed work is one such opportunity. 

Weaponizing Employment Against the Poor

A saying often attributed to Albert Einstein holds that the definition of insanity is doing the same thing repeatedly while expecting different results. The adage comes to mind when we see that, yet again, work requirements are being used as a bludgeon against Americans who live in poverty and who need safety-net programs like the Supplemental Nutrition Assistance Program (SNAP), HUD housing assistance, and, if President Trump has his way, even Medicaid. 

The White House Council of Economic Advisers has recommended work requirements for the most extensive welfare programs, and the current administration has mandated that federal agencies tighten their presumably lax welfare program standards. These moves are premised on the persistent notion that the poor are a drain on federal resources due to their laziness, recklessness, and lack of ambition. So here we go again, concluding that the poor are poor solely because of their own deficient behavior and must be made to work harder to receive assistance from their government. 

It is not that simple. 

Is it fair to require aid recipients (excluding children, the elderly, and the disabled) to demonstrate an attempt to earn their government supports, on the theory that this incentivizes people not to be poor? Or is it a kick to the poor and disenfranchised when they are already down? 

It is worth examining a few points about welfare work requirements: 

  1. According to the US Census Bureau, the 2017 poverty rate was 12.3%, down 0.4 percentage points from the year before; since 2014 it has fallen 2.5 percentage points. If the trend line is a declining poverty rate, why is a condition as harsh as work requirements necessary now?
  2. This effort was last tried under Bill Clinton and Newt Gingrich with their 1996 welfare reform legislation. We have had a couple of decades to see how that has gone, and studies such as those from the Center on Budget and Policy Priorities and the book Making Ends Meet (Edin and Lein) show that the short-term marginal improvements in employment were not sustained, largely because increased living expenses absorbed any work-generated financial gains.
  3. Where are these jobs the poor are supposed to get? If you have spent most of your life in poverty, the chances are quite low that you can pick up a knowledge-economy job quickly. We have all heard how traditional manual-labor jobs are drying up, so what is left? Low-wage, part-time jobs with unpredictable and changeable hours, that's what.
  4. If the government feels the need to pick on somebody, shouldn't it be the employers of vast numbers of unskilled and low-skilled workers, who pay their employees, including the working poor, wages so insufficient that they must be underwritten by American taxpayers?

One place where there could be political agreement is in the government providing subsidized, high-quality job training targeted at helping the poor gain the knowledge and skills needed for a globalized and digitized economy. Currently, training requirements can stand in lieu of work requirements, but their effectiveness remains questionable. 

The causes and cures for poverty are varied, complex, and far beyond the scope of this piece. But if we as a society are truly interested in ameliorating the condition of poverty (as we should be!), we need to look for interventions that measurably make positive differences. Requiring the poor to take a low-end job that increases their childcare and transportation costs just to prove they are not milking the system, or making them pay unreasonably for a hand up from those of us with taxpaying means, is not a humane way to go about it. 

Applying Technology in Hiring

Human contact, whether through professional networking, social connections, or by earned reputation still matters significantly and should in no way be minimized when describing the recruitment and hiring process. If anything, it is paramount. However, another very important track to cover when developing one’s career is the one driven by existing and emerging technologies meant to streamline and optimize the employment process. 

Today this ranges from online job boards advertising positions to Applicant Tracking Systems (ATS) that parse resumes for HR staff and recruiters. Artificial Intelligence (AI) and machine-learning tools designed to assess the employability of candidates are also now on the scene. 

How to position yourself advantageously for these digital aides and gatekeepers needs to be a key component of a well-planned career growth strategy. Let us look at each of these technologies in turn. 

Online job boards are neither very new, in short supply, nor complicated. They are little more than interactive websites that post job descriptions from employers. More recent are job search engines like Indeed and Simply Hired that rummage through the internet, aggregating job postings from a variety of sources. 

These sites are seductive in that they give the appearance of a job store stocked with profuse quantities of positions ready for you to pick up while shopping. A common and ineffective approach is to spend hours responding to postings, with the only result being recruiters trying to lure you into high-turnover, 100%-commission sales jobs. 

Nonetheless, working the job boards is not a complete waste of time, and decent jobs can result. A reasonable rule of thumb is to spend about 10% to 20% of your job search time on the boards while being careful and discriminating about what you respond to. 

ATS software allows recruiters to organize vast lists of applicants and their pertinent criteria, such as qualifications, employment history, and degrees earned, which is most useful to hiring managers when determining whom to contact for interviews. For those of us trying to secure an interview, that means preparing resumes (and LinkedIn profiles) that are keyword-rich, with contextually used terms aligning our skills and knowledge with the responsibilities and deliverables mentioned in job descriptions. 

Therefore, given the need for a resume that is simultaneously ATS-friendly and attractive to human readers, the challenge is to strike a visually appealing format that won't confuse the ATS. This can be tricky. If you want a designer resume like those on Pinterest, forget about passing ATS muster. With so many companies employing ATS software, the best strategy may be to honor the many conditions needed to avoid being digitally rejected in a millisecond, while adding enough visual appeal, and of course solid content, to keep your resume from looking like just another slice of white bread. Achieving this level of resume optimization is a necessary goal. 
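To make the keyword idea concrete, the matching at the heart of an ATS can be imagined as a crude overlap score between a job description and a resume. The sketch below is purely hypothetical and highly simplified; commercial ATS products weigh context, synonyms, phrasing, and document structure far more carefully. Still, the principle it illustrates is the one that matters to job seekers: the terms in your resume are compared against the terms in the posting.

```python
# A deliberately simplified, hypothetical sketch of ATS-style keyword
# matching. Real systems are far more sophisticated, but the basic
# principle of aligning resume wording with the job description holds.
import re

# A tiny stopword list; real parsers use much larger ones.
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "with", "for"}

def keywords(text: str) -> set[str]:
    """Lowercase the text and extract its non-trivial word tokens."""
    tokens = re.findall(r"[a-z0-9+#]+", text.lower())
    return {t for t in tokens if t not in STOPWORDS}

def match_score(job_description: str, resume: str) -> float:
    """Fraction of job-description keywords that also appear in the resume."""
    jd, cv = keywords(job_description), keywords(resume)
    return len(jd & cv) / len(jd) if jd else 0.0

# Hypothetical example inputs:
jd = "Project manager with Agile and Scrum experience; budgeting and stakeholder communication."
cv = "Certified Scrum Master; led Agile teams, managed budgeting and stakeholder communication."
print(f"Keyword match: {match_score(jd, cv):.1%}")
```

Even in this toy version, the lesson is visible: a resume that mirrors the job description's own vocabulary, used in honest context, scores higher than one describing the same experience in different words.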

The latest trend, which is expected to proliferate in use and sophistication, involves the impact of AI in hiring decision making. There is a growing perception that relying on a candidate’s skills alone is not consistently producing better employees. The evolving thought is to assess personality more with the goal of finding a well-rounded and compatible colleague.  

To this end, AI is being deployed to identify personality traits gleaned from resumes, online profiles, social media presences, video appearances, you name it. Apparently, this is seen as less biased than human observers. We shall see. (Cannot algorithms be biased too?) 

At any rate, developing a consistent brand and value proposition that includes both your technical talents and your work style/interpersonal characteristics across all platforms may be wise for presenting to human and technological appraisers alike. 

Being prepared for the changes and encroachment of technology into hiring decisions, and by extension career development, has become imperative in today’s employment world. 

A Reason for Employment Inequality

Much is made of the dearth of economic opportunity and of income inequality across the U.S. workforce. Though a perennial issue, the conventional wisdom these days, more than most, holds that there are segments of the American population for whom high-paying jobs are elusive or nonexistent. This belief persists despite the lowest unemployment rate we have seen in nearly twenty years. 

The primary reason, we are told, for this situation boils down to the fact that an automated, globalized, and corporate-led economy produces winners and losers — a somewhat different set of winners and losers apparently than the more nationally-based economy of yesteryear. 

Inequality, or even the perception of it, tends to raise the hackles of key constituencies such as left-leaning individuals and nowadays working class folks who find that many low to mid-skilled jobs are evaporating. These groups agree there is a fundamental unfairness to inequality, and they are inspired to fight against it, sometimes in dramatically different ways, whenever possible. 

One element of inequality that I do not see getting much attention, however, is the divide between people with a college education and those without one. Looking over the last half century or so, we can see that this is a significant economic phenomenon. Indeed, the discrepancy between those with and without higher education affects a variety of inequality factors, including not just income but housing, community makeup, cultural upbringing, socioeconomic standing, and social status. 

The number of working-age Americans with college degrees is steadily rising and now stands at or slightly above 40%, according to the Lumina Foundation. That is ten times the share of a hundred-plus years ago, when Andrew Carnegie, of all people, claimed college was irrelevant and even damaging. Despite the high cost of college, projections are that attendance will grow another 15% by 2025 (Inside Higher Ed). 

Bruce Cain, a Stanford University political scientist, points out that people with knowledge-based characteristics attributed to being college educated, such as professionally oriented behaviors, digital familiarity, an understanding of financial services, and innovative inclinations, tend to congregate residentially and in employment. In today’s world the “Haves” are most often the ones with a college education, and they like to stick with and hire others of their own kind. It is easy to see how this can appear unequal. 

Many Baby Boomers were raised with the notion that getting a college education would lead to greater economic gain. Although the message is more nuanced these days the central point remains the same. One unintended consequence of this virtue is that it also leads to economic inequality and resentment among those not sharing in the bounty. This acrimony can sometimes be heard among those who have taken an anti-intellectual / anti-education stance, such as when expressing skepticism (to put it politely) regarding the viewpoints of the “elites” and the “establishment”. 

Addressing this imbalance begins with a level of respect and an acknowledgement that we all have something of value to offer. From that belief could arise an economic system that honors and pursues an opportunity-for-all ethic. Those who benefit from the hard work and commitment of pursuing higher education can help those for whom college has not been a viable option through measures designed to encourage greater and more affordable college attendance. 

And for those not choosing to pursue higher ed? The means of providing employment training, entrepreneurial support, and apprenticeship alternatives, along with other opportunity options, could be made more available. Full employment across all socioeconomic groups should always be our collective objective. 

Sharing prosperity across all segments of a pluralistic society is a great challenge. Perhaps we need to see more committed action from those who have succeeded, many of whom profess liberal leanings, to drive opportunity-for-all programs so that no one’s economic prospects are left behind. 

Educating for Impending Careers

Many of us in the United States were educated as children and young adults so that we could succeed both as citizens sustaining our democratic way of life and as productive workers able to sustain ourselves and our families economically. For the most part, the combination of public and private K–12 schools and higher education universities and colleges has served us quite well. We are by and large a well-educated and constructive populace. 

But can we rely on old-school methodologies to sustain us in a world of work that will be mercurial and erratic, calling for agility, adaptability, and rapid evolution? There is reason to think not. An economy experiencing increasing speed and transformation will not be well served by an educational structure and model designed to prepare students for a relatively static and predictable work world. 

Let us examine the existing paradigm that traditionally and currently defines most American high schools and colleges. Two patterns are at play, based on the concepts of liberal education and career-focused education. By the time students reach high school, they select, or have selected for them, one of these paths or the other. 

Liberal (or liberal arts) education refers to an approach that encourages broad exposure to fundamental and diverse subject matter, the goal being to educate students for a complex world requiring a variety of perspectives, skills, and areas of knowledge. When and if college is reached, the student adds to this mix a concentrated focus in one or more disciplines. 

A career-focused or vocational path on the other hand focuses much more on preparing the student for a relevant job that is in demand in the workforce. Breadth gives way to depth in that a craft or skillset demonstrably employable is chosen, studied, and eventually mastered by the student. 

To be clear, I am not suggesting that there is anything fundamentally wrong with these models. My concern is with the traditional modes of delivering them. We still assume that a high school diploma and/or a college degree program that terminates upon graduation is enough to equip a student for a lifetime career. It used to be. However, projections are that it will not be enough going forward. 

The workplace and its career needs are becoming increasingly digitized and globalized, resulting in an urgency for malleable, resilient, and entrepreneurial workers to address the ever-vibrant economic demands across the planet. To maintain these attributes workers will need to accept and embrace continuous lifelong learning, upskilling, and training to keep up and stay ahead. Schooling will never end. In fact, it will become an integral and ongoing part of any advantageous job worth having for most people. 

We will likely see a time when liberal and career-focused methods become more of an as-needed hybrid with a greater proliferation of skill and knowledge-based certification and training programs not necessarily tied to slow moving traditional education settings. Students, employees, and educators will begin migrating more intentionally into online, virtual, and yes, brick & mortar learning facilities that offer the highest quality, data driven, short and long-term instruction essential to the requirements of the emerging economy. 

As an educator myself, with 31 years in public schools and 5 years as a part-time college adjunct, I can say with some certainty that this industry will not move in this direction on its own. There are many entrenched interests compelled to resist such changes. 

A more responsive and pragmatic instructional delivery will likely arise from a combination of innovative educators and demanding students and employees requiring relevant, timely instruction. We can all begin by getting our heads around the concept of lifelong learning. I predict it will be far more energizing and efficient, and much less stuck and draining. 

Preparing Your Career for a Binary Star Economy

Career development is as fluid a field of study and method of personal improvement as can be found anywhere. Its elasticity and increasingly erratic nature are due to the changing state of the world of work. In an environment like ours, requiring continual improvement, adaptability, and thorough planning, long-term career design can be a difficult and uncertain endeavor. 

As ancient mariners discovered when navigating vast and strange oceans, it helps to have a North Star to serve as beacon and guide. As we each chart an unclear and enigmatic career development landscape, whether to change existing careers or determine new ones, we too can benefit from a North Star. However, a Binary Star, a system of two stars orbiting their common center of mass, may be the more apt metaphor, because the duality we must now regularly consider consists of two interdependent powerhouses: globalization and automation. 

The future of work appears to be heavily influenced, if not governed, by these two harbingers. In tandem, globalization and automation are in a process of modifying the way we live, and therefore how we work. The expanding utilization of technology combined with the spreading integration of people, businesses, and governments around the world is altering economic history in a way that has not happened since the Industrial Revolution. 

As paradigm-shifting as the change from hand work to mass production was a hundred-plus years ago, we are now witnessing a transformation just as groundbreaking, if not more so. When someone like Ray Kurzweil, the 67-year-old Director of Engineering at Google, predicts that by 2029 computers will be able to perform all the tasks humans can now do, only better, I pay attention, and you should too. 

It is not just the prognostications of one man that matter (and he has some doozies), but the unmistakable short- and long-term trend lines indicating rapid proliferation of new and disruptive technologies and business models (think Airbnb, Uber, SaaS, MOOCs) and increased activity in what the International Monetary Fund refers to as the four basic aspects of globalization: international transactions, capital movements, migrations of people, and knowledge dissemination. 

Ask yourself: how well do your career plans hitch themselves to the forces of globalization and automation? It is wise to look for some connection. Enough current work is already being made redundant, and new ways of organizing work tasks are being discovered. If I were as prescient as I wish I could be, I would now present a neat and tidy list of specific, guaranteed jobs of the future. But alas, I am not that farsighted. Nevertheless, here is what I think will help us prepare for this brave new world and strengthen our decision making as we move forward. 

Paramount is the need to remain optimistic in the face of uncertainty. Pessimism and hand wringing will not fortify us against ambiguity. Those who will find success are those with a positive attitude allowing themselves to see and grasp an opportunity others do not or cannot. 

We also need to get back to having big ideas. The Hoover Dam, the Golden Gate Bridge, and the Empire State Building were all built during the Great Depression. Winning World War II, constructing the Interstate Highway System, and launching six crewed moon landings followed. Today we are all in a twist about whether to extend health insurance to the uninsured and whether to fund bridge repairs. Big problems exist that need substantial solutions. Let us find our lost courage to make grand proposals and realize lofty outcomes. 

Free thinking of the type that stimulates innovation and entrepreneurship also needs to be encouraged. It has always been America's strong suit, and it demands continuation, if not invigoration, in an ever more competitive global economy. Our schools, for one, can do a better job of transitioning from the mechanized industrial-age model to one more consistent with a broad-minded, enterprising ethos. 

Businesses dedicated to sharing, rather than the old-fashioned consumption and disposal of resources, are becoming fashionable, and profitable. Making money by sharing homes, cars, locally grown foods, breweries, office spaces, etc. is increasingly common. Disruptive of legacy business models, to be sure, but isn't that the way it is going these days? From an ecological viewpoint, an economy that utilizes resources in common with others may in part reverse the throw-away trend of the last half century. 

Reframing our attitudes and ways of thinking about the binary impact globalization and automation are having on our economy, careers, and ways of life may be the most profitable takeaway from this economic conversion.