Demonstrating value: How and to what extent public relations practitioners can better measure the impact of their work

Amber N. Daugherty
400366549
Capstone study
November 24, 2023
MCM 740: Professional Project
Department of Communication Studies and Media Arts
Faculty of Humanities
McMaster University
Capstone Supervisor: Dr. Terry Flynn

MEASURING THE IMPACT OF PR EFFORTS 2

Abstract

This research aimed to develop an understanding of how public relations (PR) practitioners in North America currently measure and evaluate their work, what gaps or barriers prevent them from using more strategic methods, and how industry measurement and evaluation (M&E) thought leaders and experts recommend practitioners engage in meaningful M&E initiatives. Research methods included a survey of Canadian PR practitioners and interviews with North American and European C-suite communicators and communications experts and thought leaders. While survey results indicated some use of measurement and evaluation practices, there were significant inconsistencies and indications of a lack of knowledge about how to use research to understand audiences, conduct environmental scanning, set SMART objectives and measure communications impact in a way that demonstrates alignment with the organization's goals. Using the learnings from this research, the researcher created a guide for public relations practitioners not currently measuring and evaluating their work, to help them begin a measurement and evaluation program that supports more strategic, effective work.
Key words: measurement, evaluation, two-way symmetrical communication, outcomes, impact, research, data

Table of contents

List of tables
Introduction
Literature review
Research problem
Research questions
Methodology
Results
Discussion
Limitations
Conclusion and future research opportunities
References
Appendices
Appendix A: Communications Measurement and Evaluation (M&E) Guide
Appendix B: McMaster University Research Ethics Board Certificate of Ethics Clearance
Appendix C: Survey Letter of Information/Consent
Appendix D: Survey Questions
Appendix E: Interview Letter of Information/Consent
Appendix F: Interview Questions

List of tables

Table 1 Survey question: What is your title?
Table 2 Survey question: How many years have you worked in communications/public relations?
Table 3 Survey question: Do you work in the private sector, public sector or not-for-profit sector?
Table 4 Survey question: What sector [industry] do you work in?
Table 5 Survey question: How big is your immediate team/approximately how many people in total work in communications across your organization?
Table 6 Survey question: What is your communication team's annual measurement and evaluation budget?
Table 7 Interviewed communications experts' locations
Table 8 Interviewed C-suite and non-C-suite communicators' locations and sectors
Table 9 Survey question: What type of information do you gather about your target audiences to inform your communications/public relations efforts? Select all that apply.
Table 10 Survey question: Where do you get information about your target audiences? Select all that apply.
Table 11 Interview question: What is your current practice for understanding your audiences, including their demographics, psychographics, media consumption, etc., as well as their needs, desires and other relevant information?
Table 12 Interview question: What is your top tip for finding the needs, desires and other information about audiences relevant to effective public relations?
Table 13 Correlation between answers to the questions Do you have processes established to regularly listen to your target audiences/stakeholders? and Do you work in the private sector, the public sector or the not-for-profit sector?
Table 14 Survey question: How do you listen to your target audiences/stakeholders? Select all that apply.
Table 15 Survey question: When you run a communications campaign, how do you confirm whether your efforts reached your target audience? Select all that apply.
Table 16 Interview question: What is your current approach to environmental scanning?
Table 17 Survey question: Is there an expectation in your organization for measurable communications objectives?
Table 18 Correlation between answers to the questions Is there an expectation in your organization for measurable communications objectives? and Do you work in the private sector, the public sector or the not-for-profit sector?
Table 19 Survey question: Thinking about recent communications plans, how often were your objectives measurable (i.e., tied to specific numbers – increase social impressions from X to X or by X%, etc.)?
Table 20 Correlation between answers to the questions Thinking about recent communications plans, how often were your objectives measurable? and Do you work in the private sector, the public sector or the not-for-profit sector?
Table 21 Correlation between answers to the questions Is there an expectation in your organization for measurable communications objectives? and Thinking about recent communications plans, how often were your objectives measurable?
Table 22 Survey question: Thinking about recent communications plans, what were your objectives focused on? Select all that apply.
Table 23 Interview question: What is your current approach to setting SMART objectives that can be measured?
Table 24 Correlation between answers to the questions Do you measure and evaluate your communications efforts? and Is there an expectation in your organization for measurable communications objectives?
Table 25 Correlation between answers to the questions Do you measure and evaluate your communications efforts? and Do you work in the private sector, the public sector or the not-for-profit sector?
Table 26 Survey question: What impact does measurement have on your work? Select all that apply.
Table 27 Survey question: Why [do you] not [measure]? Select all that apply.
Table 28 Survey question: Based on the use of measurable objectives, how effective do you think your communications efforts are?
Table 29 Survey question: Please explain/expand on your answer to the previous question.
Table 30 Correlation between answers to the questions Do you measure and evaluate your communications efforts? and Based on the use of measurable objectives, how effective do you think your communications efforts are?
Table 31 Interview question: How can practitioners not currently measuring or evaluating their efforts start – specifically in low- or no-cost ways?
Table 32 Interview question: Where do you see the future of M&E?
Table 33 Survey question: What kind of resources/support would help you implement better measurement and evaluation practices? Select all that apply.

Introduction

In theory, public relations (PR) practitioners contribute to organizational, societal and stakeholder change (Macnamara, 2018a): they can help increase sales and donations, support the improvement of public health through increased screening rates and uptake of vaccines, and improve an organization's reputation, to name just a few examples.
Yet proving their work has had an impact requires specific knowledge, expertise, time and resources to measure and evaluate associated metrics before, during and after campaigns. While it may sound simple, measurement and evaluation (M&E) remains one of the top workplace issues for public relations practitioners globally (Cacciatore & Meng, 2022).

Too often, PR practitioners focus on vanity metrics – how many people saw a post, how many articles the organization was quoted in – which provide no context about what audiences did with the messaging: did they take an action or have a mindset/behaviour change (Macnamara, 2023)? This is a problem for many reasons, including that it prevents practitioners from truly understanding the impact of their efforts and from being able to nimbly pivot or modify their strategy if it is not working to achieve their objectives.

A solid M&E program can help PR practitioners be more efficient and effective and enhance their value in the eyes of their organization's executive team, which can lead to practitioners' involvement in the dominant coalition and strategic decision making and, therefore, increased status and influence (Grunig, 2013). Critically, using data can also move PR practitioners away from relying on instinct or subjective experience and improve their business credibility (Bradbury, 2023; Paine, 2007).

This research aimed to develop an understanding of how PR practitioners in North America currently measure and evaluate their efforts, what gaps or barriers prevent them from using more strategic methods, and how industry M&E thought leaders and experts recommend practitioners engage in meaningful M&E. As part of this research, the researcher created a guide (see Appendix A) to support organizational communications functions in establishing M&E programs, with a specific focus on smaller organizations that lack budgets for outsourcing M&E to a third party or paying for sophisticated M&E tools.
Literature review

Excellent PR/excellence (effectiveness)

The excellence theory was born out of a 15-year study of best practices in communications management that was tested through survey research and qualitative interviews with PR leads, practitioners and CEOs (Grunig, 2013). The theory suggests that PR provides the most value to its organization and to broader society when it helps the organization solve problems and achieve goals for both the organization and its stakeholders, understood through dialogue with stakeholders that leads to high-quality relationships (Grunig, 2013). To be able to do this, Grunig (2013) wrote, "Public relations must be organized in a way that makes it possible to identify strategic publics as part of the strategic management process and to build quality long-term relationships with them through symmetrical communication programs" (p. 9).

While the researchers admitted that calculating an exact return on investment for communications can be challenging, Grunig (2013) highlighted that strong, long-term relationships between the organization and its stakeholders "reduced the costs of litigation, regulation, legislation, and negative publicity caused by poor relationships; reduced the risk of making decisions that affect different stakeholders; or increased revenue by providing products and service needed by stakeholders" (p. 9).

The excellence study specifically called out six key areas that helped determine or lead to excellent public relations: PR is involved in strategic management; PR is integrated in a single department as opposed to being sublimated to marketing or other management functions; internal communication is also symmetrical, helping to increase employee satisfaction; organizations with excellent PR functions value women as much as men and have programs to empower them; diversity, race and ethnicity are considered; and PR provides an ethical conscience in the organization (Grunig, 2013).
Understanding the value of public relations is crucial to begin to identify areas where PR practitioners can measure the impact of their efforts – relationship strength, successfully identifying and mitigating potential issues, and increasing revenue by helping the organization provide products and services stakeholders want are all examples of PR using its position to support both the organization and its stakeholders.

Stakeholders/audiences/publics

Although the terms publics, audiences and stakeholders are often used interchangeably, there are significant differences between them. Wakefield and Knighton (2019) explain that stakeholders are connected to organizations, audiences are connected to messages and publics are connected to issues.

Stakeholders

A stakeholder is "any group or individual who is affected by or can affect the achievement of an organization's objectives" (Freeman, 1984, p. 1, as cited in Wakefield & Knighton, 2019, p. 2). Stakeholders are connected to an organization through different forms of linkages: enabling (they enable the organization to run through funding, governance, etc.), functional (they help the organization function by providing supplies or labour, consuming products or services, etc.), diffused (they do not frequently engage with the organization but may in an event like a crisis) and normative (they have similar interests, such as competitors) (Rawlins, 2006). The most common stakeholders for an organization include shareholders, employees, customers, suppliers and communities (Rawlins, 2006). It is suggested that those with functional and enabling linkages should be considered highest priority because of the organization's dependence on them (Rawlins, 2006).
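As a purely illustrative sketch (not part of Rawlins's framework – the group names, weights and function below are hypothetical), the linkage-based prioritization described above could be encoded as a simple ranking rule:

```python
# Hypothetical sketch: rank stakeholder groups by linkage type, following the
# suggestion that enabling and functional linkages deserve highest priority.
# Weights and example groups are invented for illustration.

LINKAGE_PRIORITY = {
    "enabling": 2,    # e.g., funders, regulators, boards
    "functional": 2,  # e.g., employees, suppliers, customers
    "normative": 1,   # e.g., peers, competitors, associations
    "diffused": 1,    # e.g., media, community members in a crisis
}

def prioritize(stakeholders: dict) -> list:
    """Return stakeholder names sorted from highest to lowest priority."""
    return sorted(
        stakeholders,
        key=lambda name: LINKAGE_PRIORITY[stakeholders[name]],
        reverse=True,
    )

groups = {
    "shareholders": "enabling",
    "employees": "functional",
    "competitors": "normative",
    "local media": "diffused",
}
print(prioritize(groups))  # enabling/functional groups are listed first
```

In practice the weights would come from an organization's own assessment of its dependence on each group, not from a fixed table.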
Stakeholder typology suggests stakeholders can be grouped based on their levels of power (the ability to influence others to make decisions they would not have otherwise made), legitimacy (having a "legal, moral, or presumed claim that can influence the organization's behavior, direction, process or outcome" (Rawlins, 2006, p. 5)) and urgency (having a time-sensitive claim or relationship that is important to the stakeholder) (Rawlins, 2006). The various combinations of these factors can create a prioritization strategy: definitive stakeholders have power, legitimacy and urgency; expectant stakeholders have two of the three; and latent stakeholders have only one (Rawlins, 2006). Prioritization based on these factors must be done frequently, as stakeholders may shift which attributes they hold at any given time depending on the situation; as an example, Rawlins (2006) suggests that a dangerous stakeholder (power and urgency but no legitimacy) can become a definitive stakeholder if it gains legitimacy, as has happened with nongovernmental organizations.

Audiences

Where stakeholders are connected to an organization, audiences are connected via their individual link to a message or event – for example, sitting in a movie theatre watching the same movie (Wakefield & Knighton, 2019). When organizations send out messages, they are typically sending them to audiences, looking for people to act on the messages as individuals (i.e., buy a product, change their behaviour, attend an event) rather than asking a group to organize and work together on something (Wakefield & Knighton, 2019). However, those audiences can become (unanticipated) publics if they identify an issue and act together on it (Wakefield & Knighton, 2019).

Publics

Publics are groups of individuals who organize to act, often because they are opposed to something (Wakefield & Knighton, 2019).
They are connected to the issue, not necessarily to an organization – but they could be connected to an organization if, say, the organization had announced it was doing something and a group advocated against it (Wakefield & Knighton, 2019). For example, a public recently formed in Minden, Ontario: the hospital announced it was closing its emergency department, which led a group to organize to advocate against the closure, hosting town halls and gathering petition signatures (Davis, 2023). In this case, the hospital was interacting directly with this group, rather than just with individuals. Because publics often target organizations with their actions, they are frequently viewed by organizations as negative (Wakefield & Knighton, 2019).

Valentini et al. (2012) highlighted that organizations do not create or control publics; they can only cultivate relationships with them: "publics form chaotically and create communities and tribes outside of the influence of organizations" (p. 876). For this reason, the authors suggest that organizations must realize they are part of a broader society to which they have a responsibility – they cannot be focused solely on their own self-interests. Valentini et al. (2012) described organizations' roles as helping find common interest between organizations and publics to contribute "to the restoration and maintenance of a sense of community" (p. 875). Grunig (2013) likewise suggested that "public relations can help to 'manage' reputation by cultivating relationships with publics and encouraging management to make socially responsible decisions" (p. 16). Grunig (1997) suggested that "Organizations need public relations because their behaviours create problems that create publics, which may evolve into activist groups that create issues and threaten the autonomy of organizations" (p. 9).
In our digimodern society, connected via the internet and social media, publics can be more far-reaching and complex than in previous years – they are no longer limited by qualities like geography, religion, class, culture, etc. (Valentini et al., 2012). Climate change is an example of an issue that has created global publics committed to addressing it (Valentini et al., 2012).

Connected to this, organizations should consider their unanticipated publics: groups that form once a message is received and they are triggered by it, positively or negatively, in some way. The term unanticipated publics is meant to replace one frequently used by PR practitioners – general public – to describe when a message is sent out broadly and unintended recipients may see or act on it. Wakefield and Knighton (2019) suggest that the term general public is meaningless because, for a general public to exist for a given message:

everyone in a given society would need to (1) have an equal opportunity of receiving and acting upon the message, and (2) care about the message enough to feel a need to respond or act on it. If such a public indeed existed, there would be no need for targeting any specific group. (p. 4)

Wakefield and Knighton (2019) suggest that organizations that make predictions about how unanticipated publics will respond to their messages "will have a better chance of being proactive in their relationship building with their audiences and stakeholders and with the various publics that are affected by or can affect the organizations' behaviors and direction" (p. 4).

The situational theory of publics. PR practitioners can use the situational theory of publics to strategically segment publics, understanding publics' relationship to an issue and their likelihood to act on it (Grunig, 1997).
The theory looks at three independent variables – problem recognition, level of involvement and constraint recognition – that influence how likely an individual or public is to engage in finding information about an issue, either actively or passively (Kim & Grunig, 2011). At the top end, "a person who perceives a problem, a connection to it, and few obstacles to doing something about it is likely to seek and attend to information about the problem" (Kim & Grunig, 2011, p. 121). By understanding which publics are active (compared to latent or non-aware), PR practitioners can use their resources more efficiently to target those specific groups, instead of directing a campaign at the mass public (Kim & Grunig, 2011). It is also important to note that there are four types of publics associated with this theory based on their level of engagement in issues: all-issue, apathetic, single-issue and hot-issue publics (Grunig, 1997).

The situational theory of problem solving. An evolution of the situational theory of publics, the situational theory of problem solving added the idea that publics participate in communicative action in problem solving (CAPS), which includes active behaviours such as sharing information about an issue with others, bringing the number of independent variables to four: perception, cognition, motivation and communicative behaviours (Kim & Grunig, 2011). Notably for PR practitioners, these variables can be measured: as Kim and Grunig (2011) highlight, traditional campaigns often seek to "increase knowledge, favorable attitude, and desired behaviors of… target publics" (p. 132). But this is difficult to do, for example, in a health campaign that is one-way and based in persuasion. Measuring individuals' awareness of the issue, perceived constraints and willingness to share information about it may provide richer data about the campaign's impact (Kim & Grunig, 2011).
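The segmentation logic of the situational theory can be illustrated with a small sketch. The numeric thresholds, field names and function below are invented for illustration only; the theory itself does not prescribe numeric cut-offs:

```python
# Hypothetical sketch: segmenting a public using the three situational-theory
# variables. Survey responses would first be scaled to 0-1; the 0.5 thresholds
# are arbitrary choices for this illustration.

from dataclasses import dataclass

@dataclass
class PublicProfile:
    problem_recognition: float     # 0-1: do they perceive the problem?
    involvement: float             # 0-1: do they feel connected to it?
    constraint_recognition: float  # 0-1: do they see obstacles to acting?

def segment(p: PublicProfile) -> str:
    """Classify a public's likely communication behaviour."""
    if (p.problem_recognition > 0.5
            and p.involvement > 0.5
            and p.constraint_recognition < 0.5):
        return "active"   # likely to actively seek information
    if p.problem_recognition > 0.5:
        return "aware"    # likely to attend to information passively
    return "latent"       # unlikely to engage yet

print(segment(PublicProfile(0.9, 0.8, 0.2)))  # prints "active"
```

A sketch like this only matters insofar as it shows the variables are measurable: a practitioner could score survey responses before and after a campaign and watch the segments shift.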
Kim and Grunig (2011) suggest that the situational theory of problem solving can also help set better objectives – for example, in campaigns about health risks, "to increase the sense of seriousness and connection to the health risks (increasing problem recognition) and lift the barriers for members of publics to do something about the problem (decreasing constraint recognition)" (p. 132). They go on to suggest that "the effectiveness of organizational policy and communication programs can be tracked by the changes in levels of problem and involvement recognition and constraint recognition" (p. 135).

The impact of PR should be positive for both organizations and the people it interacts with – stakeholders, audiences and publics – and understanding the differences between the three helps practitioners better determine both who they are trying to reach with their efforts and how they can identify, segment and prioritize those groups. This is critical as PR practitioners set their objectives, evaluate the extent to which they achieved them and gather feedback to support better future efforts. In this study, the researcher used the theoretical assumption that many PR practitioners use the terms stakeholders, audiences and publics interchangeably. For this reason, the researcher chose to use only the term audiences in data collection to avoid confusion.

Two-way symmetrical communication

How PR practitioners communicate with an organization's stakeholders, audiences and publics has long been a topic of interest, concern and criticism in PR practice and academia (Grunig, 2001). The ideal method is two-way symmetrical communication, where practitioners "use research and dialogue to bring about symbiotic changes in the ideas, attitudes, and behaviors of both their organizations and publics" (Grunig, 2001, p. 12).
This differs from the press agentry/publicity model, in which practitioners solely broadcast information; the public information model, in which in-house, journalist-style practitioners write and then share information; and the two-way asymmetrical model, in which practitioners conduct research that informs their actions so they have the best chance of persuading stakeholders, audiences and publics (Grunig, 2001). The mixed-motive model of PR, a combination of the two-way symmetrical and two-way asymmetrical models, is often described as better than the two-way symmetrical model; however, Grunig (2001) outlines that its description, where "organizations try to satisfy their own interests while simultaneously trying to help publics satisfy their interests" (p. 12), is what was originally intended by his two-way symmetrical model. He says that "persuasion is still a relevant concept in the symmetrical model. The difference is that the public relations professional sometimes must persuade management and at other times must persuade a public" (p. 13).

Cultivating high-quality relationships between an organization and its stakeholders, audiences and publics has been highlighted as a defining feature of PR that separates it from other functions like marketing and advertising (Smith, 2012). For these relationships to be successful, however, dialogue must occur, and dialogue has been poorly understood and practiced by many PR practitioners because true dialogue requires relinquishing control over the outcome (Theunissen & Wan Noordin, 2012). Theunissen and Wan Noordin (2012) argue that if practitioners engage with a group while anticipating what they would like the result to be, they are not participating in authentic dialogue. In fact, Theunissen and Wan Noordin (2012) suggest that "when the motives for engaging in dialogue are about persuading its stakeholders, risk to and vulnerability of the stakeholders increases, raising ethical concerns" (p. 7).
Research has found that symmetrical communication and dialogue have been used by some organizations to gather information that helps them identify how to make small concessions that allow them to "maintain social order and to preserve their own hegemony" (Grunig, 2013, p. 18). Coorientation, referred to as the parent of the symmetrical model, suggests that communication should be used to adjust one's own ideas and behaviours to others "rather than to try to control how others think and behave" (Grunig, 2013, p. 6). This is risky, though – critics have suggested that PR supports business goals:

to make profit by increasing revenues through publicity, to improve operation efficiency through employee communication, to get as much freedom as possible through influencing public opinion so they can operate their business without (governmental) constraints, to increase competitiveness through managing issues (and we could add crisis management as part of this), and to promote corporate values and acceptance of these values. (Theunissen & Wan Noordin, 2012, p. 7)

Theunissen and Wan Noordin (2012) argue that dialogue is not required for any of these, but publicity and persuasion are. Given that true dialogue requires a willingness to set aside an organization's carefully crafted image and show up authentically, and that the result may be unpredictable and may in fact lead to further disagreement, it is not surprising that PR practitioners are hesitant to engage in dialogue when the risks are so high.

Further to this, dialogue requires both talking and listening, and yet a wide-reaching study found that listening is not employed by many or even most PR practitioners (Macnamara, 2016). Rather, it found a higher likelihood of organizational speaking, or what the author described as SOS: sending out stuff (Macnamara, 2016). This goes against the basic principles of two-way symmetrical communication, which "sets the stage for mutual influence.
You cannot be influenced by a group if you never hear it" (Coombs & Holladay, 2007, p. 46, as cited in Macnamara, 2016, p. 163). Organizational listening and dialogue/engagement are not just nice to have; they can "result in increased employee retention and productivity, increased customer loyalty, improved customer service, reduced industrial disputation, and reduced crises and conflicts affecting organizations" (Macnamara, 2016, p. 164).

Macnamara made the importance of this tangible, saying that PR's job is to inform and persuade people – which will happen more effectively if practitioners understand whom their stakeholders, audiences and publics trust and where they get information (Sydney Lectures, 2022). He highlighted the importance of this during the COVID-19 pandemic: knowledge gathered through organizational listening helped craft strategies such as the United Kingdom's government engaging local physicians to deliver the message about the importance of getting vaccinated (Sydney Lectures, 2022). Macnamara (2016) said effective, ethical listening requires a culture open to listening, listening policies, open/interactive systems to get feedback, monitoring/technology tools to support listening, staff to do the listening, skills to listen effectively and a process for learnings to flow back through the organization to impact policy and decision making.

If a key outcome of PR practitioners' work is to build and strengthen relationships with key stakeholders, audiences and publics, it is critical that they understand and effectively use recommended methods of doing so, including dialogue and organizational listening.
Environmental scanning and benchmarking

Information gathering is a critical part of PR; practitioners must understand what is happening outside of their organization – potential issues and opportunities, who they need to be talking to (stakeholders/audiences/publics) and what their needs are, what their competitors are doing and more – to guide their efforts. This work can and should happen through both environmental scanning and benchmarking.

Environmental scanning

The excellence theory includes environmental scanning as a practice of the top PR functions: "The most excellent departments participated fully in strategic management by scanning the social, political, and institutional environment of the organization to bring an outside perspective to strategic decision-making" (Grunig, 2013, p. 12). Dozier (1986) found a correlation between practitioners conducting environmental scanning and having a strategic (manager) role because "managers…are expected to solve problems between the organization and publics. Such problem solving requires an understanding of 'what's going on out there'" (p. 17). While there is no one accepted way to conduct environmental scanning, Dozier (1986) suggested there are buckets that different methods fit into: scientific (i.e., formal studies and surveys, etc.) and informal (i.e., phone calls, in-depth interviews with members of the organization's publics, identifying trends in media, holding work-group meetings with staff, etc.). Research has shown that personal contacts such as customers, journalists, supervisors and employees can be better sources of environmental information than impersonal sources such as media and public opinion polls (Grunig, 2013).
Grunig (2013) pointed to work highlighting that an ideal environmental scanning process should include:

monitoring strategic decisions of management to identify consequences on publics, monitoring web sites and other sources of information from activists, using the situational theory to segment publics, developing a database to analyze information, and monitoring media and other sources to track the process of issues management. (p. 12)

The field of strategic management includes environmental scanning as a prerequisite to creating a strategy; before moving ahead with a plan, it is necessary to understand the space one will be operating in, including "economics, drivers of profitability, and… key success factors to understand what it takes to win in an industry" (Crossan et al., 2016, p. 62). For this reason, there are numerous models for understanding the current environment, including the SWOT analysis, PESTLE, Porter's Five Forces, blue ocean strategy and so on. These types of models could be used by PR practitioners to identify issues, threats and opportunities.

Benchmarking

If environmental scanning is focused externally, looking out from PR at the organization and the broader society in which it operates, benchmarking is more internally focused on the PR practice and how it compares to its competitors and leaders in the profession. It is meant to be an ongoing approach to understanding how the PR function compares to others in order to improve its own performance (Fleisher & Burton, 1995). Benchmarking can lead to better communications; a more scientific, rather than intuitive, approach to efforts; actionable insights on areas to improve efficiency and effectiveness; and an increased ability for the PR function to adapt or change, better plan and evaluate resource allocation and improve decision making (Fleisher & Burton, 1995).
Fleisher and Burton (1995) quoted a communicator who said:

benchmarking can't ensure that I'll always make good decisions, but it can ensure that my decisions will be made on systematically derived evidence… [which] gives me a much better chance of convincing my boss of the appropriateness of our strategies. (p. 5)

When looking at ways to create impact, environmental scanning and benchmarking can help practitioners better understand the environment they are operating in and learn lessons from their competitors, peers and leaders in PR to improve their own efficiency and efficacy.

Measurement and evaluation (M&E)

PR M&E helps determine the effectiveness and/or value of efforts: in the short term, identifying whether and how PR strategies, tactics, etc. supported the success or failure of PR objectives; and in the long term, identifying a PR program's success or failure at supporting the strengthening of relationships between the organization and its key publics and achieving organizational goals (Lindenmann, 2003).

PR measurement is a way of giving a result a precise dimension, generally by comparison to some standard or baseline and usually is done in a quantifiable or numerical manner… [and] PR evaluation determines the value or importance of a PR program or effort, usually through appraisal or comparison with a predetermined set of organization goals and objectives. (Lindenmann, 2003, p. 2)

Finding an effective way to measure and evaluate communications efforts has been described as the holy grail of the field and has led many practitioners to attempt to identify a silver bullet, or one singular measure that proves the field's value (Buhmann et al., 2019; Likely & Watson, 2013). The Excellence Study found that PR can add value on five levels: individual messages or publications; programs or campaigns; functions or departments; organizational or enterprise; and societal levels (Likely & Watson, 2013).
The study also found that most research and work has been done on evaluating the first two levels, despite their lack of ability to show PR’s true value to an organization, including the strength of relationships the organization has with its stakeholders and publics and the impact the organization’s work has on society (Likely & Watson, 2013). Grunig (2008) wrote that “Effective organizations are able to achieve their goals because they choose goals that are valued by their strategic constituencies both inside and outside the organization and also because they successfully manage programs to achieve those goals” (p. 96) and that ineffective organizations have difficulty achieving goals because their publics do not support those goals. He continued: Public relations makes an organization more effective, therefore, when it identifies the most strategic publics as part of strategic management processes and conducts communication programs to develop effective long-term relationships with those publics. As a result, we should be able to determine the value of public relations by measuring the quality of relationships with strategic publics. (Grunig, 2008, p. 97) Proving its value is not a nice-to-have for a PR function; failing to do so can limit its ability to provide strategic support in the organization. The Excellence Study found that excellent PR should have a seat in the dominant coalition and support strategic decision making, yet “public relations could not have a role in strategic management unless its practitioners had a way to measure its effectiveness” (Grunig, 2013, p. 7). To act as a strategic function, PR needs to evaluate whether objectives were met and how they were (or were not) met to allow for future improvements (Buhmann & Likely, 2018).
As Buhmann highlighted, “strategic communication needs to be evaluated not just in terms of the intended effect of a message, product or campaign but also in terms of its broader strategic and operational contributions for the whole organization” (p. 13). This helps enhance PR’s value to executives; Macnamara suggested that communications is often seen as a cost centre, spending the organization’s money, until the point at which the function can demonstrate outcomes and impact, when it becomes a value-adding centre (European Committee of the Regions, 2016). Where PR fits into an organization can also impact what it measures; research has found that when PR reports directly to the C-suite, it is more likely to be strategic, measuring factors such as crisis avoidance, reputation, employee attitudes and stakeholder opinions. When PR reports to marketing, it is more likely to focus on evaluating its contribution to sales and media coverage (International Association of Business Communicators, 2007). Current state A recent global survey found that measurement was the third most important issue facing communications leaders after increasing volume flow and the digital revolution (Cacciatore & Meng, 2022). There are countless reasons why M&E are not done regularly by practitioners, including lack of budget, time, data, knowledge (including academic preparation), tools, industry standards and support or understanding from senior executives (Arenstein, 2021; Buhmann et al., 2018). Grunig suggested another: fear that evaluation would show practitioners’ efforts are not working (International Association of Business Communicators, 2007). Some also have negative associations with being evaluated.
When this became an issue during work he was doing for the World Health Organization, Macnamara changed the framing: instead of referring to an M&E program, he changed it to MEL – measurement, evaluation and learning – to emphasize it was not about criticizing but rather improving communications efforts (Amecorg, 2023). Macnamara (2007) suggested that most practitioners do not use research to measure because they do not see it as relevant: “When one focuses on and sees one’s job as producing outputs such as publicity, publications and events, measurement of effects that those outputs might or might not cause is an inconsequential downstream issue – it’s someone else’s concern” (pp. 6-7). Macnamara raised the idea of functional stupidity – “illogical outcomes at a function level – not individual activity” (Macnamara, 2018b, para. 7) – suggesting that some practitioners overpromise to win pitches or impress management: In such situations, rigorous evaluation is not in the interest of practitioners in PR and communication management. It is actually in their interest to avoid rigorous evaluation or to use simplistic methods that show activities and outputs rather than outcomes or impact. (Macnamara, 2018b, para. 8) Macnamara (2007) suggested that PR practitioners have “evolved to be predominantly intuitive, author-centric and concerned primarily with producing outputs, whereas communication scholars, researchers and social scientists take an approach that is scientific, audience-centric and concerned with outcomes” (p. 8). Often, even when PR practitioners do measure, they only look at/prioritize vanity metrics such as media pickup or social media impressions, which provide no insight on whether message receivers retained or believed the information or did anything differently as a result (Macnamara, 2023).
Two studies across more than 60 countries showed that press clippings, media analysis and web tracking were the most popular M&E methods for PR practitioners while research on attitude or behaviour change or more sophisticated methods were rarely used (Buhmann & Volk, 2021). Part of this issue is a lack of knowledge; research in 2021 found that 40% of PR practitioners lacked data competency (Meng et al., 2021). This lack of demonstrated value is mirrored in research: a systematic review of PR evaluation and measurement found that there was no guiding theory or “coherent theoretical body of knowledge within evaluation and measurement research” (Volk, 2016, p. 969) and that, while how to measure relationships and reputation has been among the most researched topics, the least researched have been assessing the overall value of PR and measuring other intangible values. Despite all this, PR practitioners acknowledge the benefit of focusing on outcomes and impact: a 2023 survey of PR professionals identified producing measurable results and tying PR activity to business impact as the top two ways of increasing the value of PR among internal stakeholders (Muck Rack, 2023). A 2023 study on the future of corporate communications found that, while chief communications officers were more respected and relied on by their CEOs than in previous years, they were “still struggling to receive the consistent support needed to evolve… in part, [due] to the ongoing difficulty of directly linking communications activities to business outcomes” (Edelman, 2023, p. 8). Principles and frameworks While there is no one accepted way to measure and evaluate communications efforts, several frameworks and principles provide a place to begin.
The integrated evaluation framework from the International Association for Measurement and Evaluation of Communications (AMEC) is an online tool that provides a step-by-step guide for PR practitioners to think about M&E in their campaigns (AMEC, n.d.-a). Based on a program logic model, the framework begins with organizational and communication objectives and ends with organizational impact (i.e., did the campaign lead to change in reputation, relationships, reaching targets, increased staff loyalty and retention, organizational change, social change, etc.) (AMEC, n.d.-c). For each step, the framework provides a definition as well as examples. Critically, it differentiates between key terms that are often confused or used incorrectly by PR practitioners: outputs (content, materials, activities to get the message out), outtakes (target audiences’ initial responses/reactions), outcomes (measurable effects on the audience such as knowledge, trust, preference, intention, attitude or behaviour change, advocacy) and impact (AMEC, n.d.-c). The distinction between the tactics the communicator used (outputs) and the effects they had on the audience (outtakes, outcomes, impact) is key because, for example, PR practitioners often highlight media uptake (an output) as a signal their campaign was successful, despite the nuances associated with media – such as lack of trust in it, conflicting messages and lack of recall – which suggest that just because an article was published does not mean it had the desired effect (Pawinska, 2023). Another frequently referenced resource in this area is the Barcelona Principles (BPs), a list of seven statements that outline key ideas about PR M&E, such as “setting measurable goals is an absolute prerequisite to communication planning, measurement, and evaluation” and “holistic communication measurement and evaluation includes all relevant online and offline channels” (AMEC, n.d.-b).
However, some criticism has suggested that the BPs are too general and, despite their continued modification over the years, have not included an education plan, so rather than being adopted by practitioners, they are more like soft guidelines for consideration (Buhmann et al., 2019; Michaelson & Stacks, 2011). In fact, that is one of the biggest criticisms of most of the frameworks that have been created for PR: inconsistent adoption and use. Some have suggested this may be due to a belief by practitioners that PR is just different and so cannot have one standard, or that consultants are using their own proprietary methods which cannot be independently replicated and tested (Michaelson & Stacks, 2011). A lack of universally adopted standards is a missed opportunity because it “leads to a lack of compatibility with organization-wide evaluation procedures and hampers the comparability of measurement and evaluation of strategic communication between organizations” (Buhmann & Volk, 2021). Standards help practitioners understand how they are performing compared to their peers within and outside of their organization to improve their efficacy; using what is known to work can help “save us time, effort, and allow us to deliver better research, measurement, and evaluation” (Geddes, 2011). Some warn that if PR practitioners do not have their own standards, clients will impose theirs on them (Institute for Public Relations, 2012). Michaelson and Stacks (2011) highlighted that even measures that seem simple need to be standardized because the way people go about measuring can be quite different; they referenced one study in which five evaluation agencies looked at the same briefing document featuring 138 media stories. The agencies came back with completely different analyses; when asked to assess sentiment, for example, results ranged from 17 to 100 per cent positive (Michaelson & Stacks, 2011).
Thus, it is not enough to measure but to measure in such a way that results will be valid, reliable and replicable: “standardization of public relations measures requires significantly more than a description of the measure to be included in the analysis, but the implementation of specific research procedures and protocols that will be applied uniformly and consistently” (Michaelson & Stacks, 2011, p. 9). Controversy When evaluation is done, it is often criticized for being done poorly: using substitution fallacies, referencing proven ineffective methods such as ad value equivalency (AVEs), trying to fit PR into the financial return on investment (ROI) language or even making up metrics (Buhmann et al., 2019; Macnamara, 2023; Paine, 2023). Substitution fallacies happen when practitioners substitute a metric at one level for another; an example is suggesting that the potential media reach of a story demonstrates organizational impact such as improved reputation (Macnamara, 2023). Being quoted in media can be important to track if it is aligned with the campaign’s objectives, but simply being quoted in the media does not indicate any outcome-based change such as knowledge/attitude/trust/preference/etc. (Macnamara, 2023). AVEs, which suggest that an appearance in a media story can be attributed a dollar value by estimating how much it would cost to pay for a similar ad placement, have long been a source of contention in the industry. In fact, one of AMEC’s Barcelona Principles is the clear direction that “AVEs are not the value of communication” (AMEC, n.d.-b). In a “say no to AVEs” campaign, AMEC listed 22 reasons they were an ineffective measurement, including that AVEs do not account for target audiences, coverage quality or sentiment (i.e., positive versus negative) and that AVEs are a vanity metric that may look impressive but are ultimately meaningless (Bagnall, n.d.).
A 2020 report found that almost half of industry leaders globally were still using AVEs almost ten years after AMEC’s campaign, though their use had decreased in North America (Holmes, 2019). Macnamara (2023) tied this ongoing use of AVEs to the idea that measurement has long been media-centric, rather than audience-centric, a trend he suggests needs to change given the vastly different landscape PR practitioners operate in now, with social media providing a direct connection to audiences. Connected to the idea of measuring the financial value of PR is the controversial use of the business term return on investment (ROI). One of the largest concerns with the use of ROI is that PR activities are difficult to measure in financial terms because they happen alongside other communications and business efforts and often focus on numerous non-financial outcomes such as strength of relationships (Likely & Watson, 2013). While Grunig has suggested the financial returns from strong relationships can be measured (i.e., in avoided costs, risks and increased revenue), estimating the associated financial costs of maintaining the relationship can be challenging because “they are long-term, lumpy, and often keep things from happening” (Watson et al., 2011). Indeed, in a true dialogue, both sides may disagree – how does one measure the ROI when it is not positive (Theunissen & Wan Noordin, 2012)? In Watson et al. (2011), Grunig suggested “we should measure relationships but explain their value conceptually to understand (but not measure) the ROI of public relations” (para. 10). Other experts suggest PR practitioners use the language of business such as return on investment only for short-term campaigns where a direct link can be made between a campaign and sales or reputation, as opposed to longer-term strategies, for example those increasing awareness and understanding (i.e., health promotion programs, supporting the creation of a policy, etc.)
(Watson & Zerfass, 2012). PR practitioners may also use proxies to demonstrate movement towards business goals; for example, a 2022 study found that PR (via positive news) is more effective than marketing in driving lower marketing-funnel metrics such as purchase intent (Dwyer et al., 2022). Notably, experts provide reminders that in any case, it is important to use the term correlation instead of causation when referring to PR efforts, as multiple external factors may influence an audience’s response (unless conducting a scientific, randomized control trial) (Bruce, 2018; Lindenmann, 2003). Finally, Paine (2023) highlighted the “top 10 PR measurement atrocities” in a blog post earlier this year. These included making up metrics in a variety of ways: giving banner ads a dollar value, cutting out data from various sources to make results seem more impressive and adding multipliers to certain media outlets or types of stories. Ironically, while many in PR look for a single evaluation method to prove their impact (often related to media via AVEs or ROI), business has introduced techniques like the balanced scorecard and triple-bottom-line reporting, which highlight that numerous factors need to be considered to demonstrate value (Gregory & Watson, 2008). Best practices One common misconception is that M&E only happen at the end of a PR campaign. While summative evaluation is important, experts highlight that formative research – before a campaign begins – is critical as it allows PR practitioners to set specific, measurable, achievable, relevant and timebound (SMART) objectives more effectively with information about their audiences in mind (IPR Measurement Commission, 2022; Matusitz, 2022).
Matusitz (2022) suggested that formative research “is often hailed as the most important factor in the design and implementation of a successful campaign” as it helps “identify the stance, knowledge, and needs of the target group vis-à-vis a specific issue” (ch. 10, formative research section). Engaging in some form of M&E before a campaign is also critical to establish a baseline; as one measurement expert put it, “How can you measure if you don’t know where you’re starting?” (Bradbury, 2023). Using research to set SMART objectives allows PR practitioners to prioritize their strategy and tactics, be more efficient and help ensure the campaign ladders up to the business objective (Anderson et al., 2009). The better practitioners understand their target audiences including their beliefs, needs and values, the better they can adapt their campaign to be more successful (Matusitz, 2022). A practical example is using something as seemingly simple as understanding what the organization’s target audience is searching for and then using that to drive content (Bradbury, 2023). Broadly speaking, formative research also helps practitioners better identify how to develop relationships with their publics and stakeholders and “to determine how the organization can align its behavior with the needs of its publics” (Grunig, 2008, p. 98). Qualitative research is often more helpful than quantitative at the formative stage and can include environmental scanning, organizational listening, focus groups and so on (Grunig, 2008). This stage is also important for determining what is a reasonable goal; as AMEC (n.d.-f) highlights, it may be difficult to tie PR efforts to sales but it is “PR’s job to build mental availability (or brand salience). In other words, to make sure the target audience thinks of your brand first at the time of purchase” (Drive sales section, para. 2).
Because of this, setting an objective to reach the target audience with the key messages and then running surveys to understand unaided brand awareness and what people think about the brand might better set the PR team up for success than saying PR will increase sales by X% (AMEC, n.d.-f). Mid-campaign evaluation allows practitioners to identify whether the campaign is on track to reach its objectives and make changes as necessary (European Committee of the Regions, 2016). What types of M&E happen at this stage are largely dependent on the objectives and goals; as one example: if based on the understanding of the target audience levels of information, the communications objective of the program is to generate awareness of a brand or product, then the message most essential to communicate through third parties or intermediaries is the name of the brand or product. Content is analyzed to determine if that message is present in the third party story, absent in the story, or appears in an erroneous form. (Michaelson & Stacks, 2011, p. 18) Process evaluation should include outputs and outtakes – those short-term insights to identify whether “messages are being sent, placed, or received” (Grunig, 2008, p. 99). This can include how many press releases are being sent out, media pickup and social monitoring to determine how messages are being interpreted (Grunig, 2008). Metrics should be combined, such as volume plus quality: rather than just looking at how many media stories were published, PR practitioners should also look at the sentiment, message pull-through, spokespeople, etc. in those stories for additional insights (Bradbury, 2023).
Grunig (2008) highlighted that it is not enough to measure process indicators that have always been tracked by an organization, such as issuing press releases, but to measure those that were identified through formative research as tactics that would have demonstrable and meaningful outcomes both short- and long-term for the specified audiences. He wrote: At the program level, we must demonstrate, first, that the processes have had short-term effects on the cognitions, attitudes, and behaviors of both publics and management—what people think, feel, and do. In addition, we need to determine whether those short-term effects continue over a longer period—that is, whether they have any effect on the long-term cognitive, attitudinal, and behavioral relationships among organizations and publics. (Grunig, 2008, p. 99) Summative evaluation post-campaign helps identify whether objectives were met and what factors led to the campaign’s successes and/or failures (Buhmann et al., 2018). This type of evaluation requires more research-driven methods such as surveys, focus groups, control/comparison groups, customer journey mapping, etc. (Grunig, 2008; Macnamara, 2023). It also, critically, requires an initial SMART objective; as Macnamara has spoken about, waiting until the end of a campaign to evaluate its success is difficult because “you’re out of data, you’re out of time, and very often you don’t have the right baseline data to compare to” (European Committee of the Regions, 2016, 3:40). Relationships, one of the key areas PR can impact, are a longer-term metric that should be measured via survey related to six key components: control mutuality, trust, satisfaction, commitment, exchange relationship and communal relationships (Hon & Grunig, 1999).
The last two components gauge perception of the organization’s commitment to the relationship even when it does not need something from its stakeholders/audiences/publics; an exchange relationship is based entirely on expecting something in return whereas a communal relationship is one where “both parties provide benefits to the other because they are concerned for the welfare of the other – even when they get nothing in return” (Hon & Grunig, 1999, p. 21). Hon & Grunig (1999) created a comprehensive questionnaire that can be used to measure relationships that includes questions in the six identified areas. The importance of data Data is critical to enacting best practices in M&E; it can help identify key journalists and influencers, understand audiences and tie PR efforts to business impact (O’Neil et al., 2022; Weiner, 2021). Weiner (2022) suggested that a good M&E program can include four types of analytics: descriptive (what happened), diagnostic (why did it happen), predictive (what will happen next) and prescriptive (what do we do about it). This more sophisticated approach to M&E can provide insights that drive action in significant ways, for example being able to identify when online chatter is going to turn into a crisis for the organization and/or how to respond (Weiner, 2022). From a professional development standpoint, a PR practitioner who uses data to drive action may be more likely to get promoted; as one CMO suggested in a recent study: “Based on what I’ve seen in the communications industry and space, numbers and analytics are almost always the surest path to leadership” (O’Neil et al., 2022, p. 155). The use of data is also increasingly expected: a 2023 future of corporate communications study highlighted that executives now expect consumer-grade data and analysis to make decisions based on real insights rather than gut instinct or intuition (Edelman, 2023). 
It also recommended that: communications leaders would be well served by asserting ownership over the unmet need to define communications’ measurable value and demonstrate progress against benchmarks and targets. Doing so will put you on equal footing with other functional leaders more accustomed to measuring their progress and performance against stringent enterprise outcomes. (Edelman, 2023, p. 9) Weiner (2021) put it simply: “In a world driven by data-informed decision-making, the absence of PR data means PR’s value remains entirely subjective, an undesirable position which puts our professional standing at risk” (p. 2). Paine (2007) likewise wrote: Data at your fingertips saves time in deciding what media outlets to target, and it saves resources by showing clearly which weaknesses need to be addressed immediately. It helps you better direct the resources you have, ensuring that their efforts are having maximum impact. Data at your disposal means less time debating the merits of one tactic over another. Gut feelings can always be second guessed, but data is much harder to argue with. (p. 12) Not all data is equal though, and some experts have highlighted that where you get your data can change its relevancy. For example, only a small subset of individuals is on Twitter; the platform’s users are not representative of the broader population (IPR Measurement Commission, 2022). Likewise, one expert shared an example of conducting a survey in which data showed that, based on the responses, John McCain would win a recent presidential election. The critical note alongside the results was that the survey was conducted with AOL users, who were more likely to be Caucasian men over the age of 50; just because it was real data did not make it valid or reliable (IPR Measurement Commission, 2022).
PR practitioners not only have to know how to access data but how to assess its validity and tell the story about the data as well (IPR Measurement Commission, 2022). Multiple barriers to PR practitioners using data effectively have been noted, including lack of data literacy; many practitioners are unfamiliar with using data and analytics in their regular work and may not know how to parse out what is important in what can seem like an overwhelming amount of data (O’Neil et al., 2022). In fact, a study in 2021 found that out of relevant communications competencies, data was the weakest; almost 30% of PR professionals were underskilled and just over 10% were defined as critically underskilled (Meng et al., 2021). Getting access can also be difficult; data is not always centralized in a communications department, so practitioners may need to seek out relevant data and/or liaise with others in the organization. There can also be a challenge in staying current with new and changing technology. Finally, there is also a feeling that many individuals who work in PR made a conscious effort to work with words, not numbers (potentially because of a fear of math) (Arenstein, 2021; O’Neil et al., 2022); a CMO in recent research suggested, “businesses run on numbers, so if you are unwilling to grapple with the numbers, it's highly unlikely that you will advance in your career at the same rate as someone who is” (O’Neil et al., 2022, p. 156). In response to a survey showing some of the barriers to M&E programs, M&E expert Paine said, “My question to this group is, ‘What’s the cost of ignorance, risk and wasting more of your budget on things that don’t work’ when you choose not to measure?” (Arenstein, 2021). Case studies Examples of successful and meaningful M&E programs exist in the form of case studies highlighted through professional communications associations and research bodies, including AMEC.
AMEC shared one about the United Kingdom’s Department of Health and Social Care (DHSC), which had an initiative to train a teacher at every secondary school in mental health first aid (AMEC, n.d.-d). Having identified objectives of generating interest in the training (measured by web visits) and increasing signups for the training (measured via online registrations), the DHSC placed its efforts on trade and audience-specific media (i.e., that secondary school teachers read) using both national and regional tactics. It was able to see a correlation between its efforts and visits to the website, as well as signups for the training (a 6.3x increase over the three days following the initial announcement) (AMEC, n.d.-d). Another example is the company Philips, which set objectives of raising awareness, recommendations and purchase intent of a steam iron among women in the UK (AMEC, n.d.-e). After running an initial campaign, the company identified success in those areas through a survey. Unexpected learnings from the survey led to a secondary campaign; the target audience showed high brand awareness but lower in-depth understanding of the iron’s technology, so the company hosted ironing parties where women could try the iron, which led to an increase in recommendations and online reviews for the product and to Philips climbing to the top spot in UK garment care (AMEC, n.d.-e). Another case study was written up by the Bank of Canada highlighting the organization’s desire to better measure and evaluate its work. The organization’s communications team created a logic model to show how activities would lead to outputs (posts, articles, publications, etc.) which would then help drive immediate outcomes such as reach, education and engagement, intermediate outcomes such as awareness, knowledge and attitudes and the ultimate desired outcome of increased trust in the organization (Bank of Canada, 2021).
It is important to note that, in the corresponding framework, the intermediate and ultimate outcomes were all measured through surveys and/or focus groups. Analysis of the data helped the bank modify its strategy to better align with audience preferences: for example, when data showed a preference for shorter, simpler content, the bank added a summary to all its speeches. Additionally, the bank used learnings during the pandemic to inform how it communicated financial changes, creating a web page to explain the actions it was taking, participating in media interviews and publishing explainers to help Canadians better understand complex economic topics (Bank of Canada, 2021). M&E is critical for PR; aligned with the excellence theory, effective M&E programs can “improve communication and business performance for organizations” (Cacciatore & Meng, 2022, p. 127) and help advance PR as an industry (Cacciatore & Meng, 2022; Grunig, 2013). Research problem This research explored how and to what extent PR practitioners can better measure the impact of their efforts. It sought to understand current M&E practices employed by PR practitioners, the barriers and gaps they experience in effectively measuring and evaluating their efforts and recommendations for a better M&E practice from communications experts and thought leaders. These findings supported the creation of a practical guide to establish M&E programs in organizations that are not currently engaging in this practice and have limited budgets, resources and/or capacity for M&E. Research questions RQ1: How and to what extent do PR practitioners know they are connecting with their stakeholders, audiences and publics and what evidence do they have that shows they are doing it?
This question first examined how and to what extent PR practitioners understand their stakeholders, audiences and publics in a way that helps target their strategy, including tactics and messaging (i.e., demographics, media consumption habits, etc.). Second, this question probed whether PR practitioners engage in dialogue with and/or listen in some way to their stakeholders, audiences and publics. Finally, this question examined whether PR practitioners have a feedback loop to identify stakeholder/audience/public response to a communications campaign. RQ2: What role do measurement and evaluation play in the formation of PR practitioners’ efforts? Leading M&E experts have suggested a critical step in setting PR practitioners up for success is integrating M&E into the creation of a communications strategy. This research question aimed to understand to what extent practitioners use environmental scanning in their planning and whether they incorporate SMART objectives into their communications campaigns. Further, because not all objectives are created equal, this question aimed to understand to what extent PR practitioners are aware of the difference between and incorporate outputs, outtakes, outcomes and impact to measure the success of their work. RQ3: How and to what extent do PR practitioners measure and evaluate their work? This question aimed to understand PR practitioners’ practice of evaluating their efforts before, during and after a communications campaign, including whether they measure avoidance. Further, it examined whether PR practitioners believe their efforts to be effective based on measurable objectives and why. RQ4: How can PR practitioners, who are not currently engaging in M&E, begin measuring their efforts? 
The previous three questions focused on understanding current communications practices; this question was forward-looking and was answered through interviews with leading M&E and communications experts, as well as by asking Canadian PR practitioners what types of resources would support them in implementing better M&E programs. It aimed to identify the best steps for PR practitioners who are not currently measuring or evaluating their work to begin an M&E practice. While this question incorporated suggestions for supportive M&E communication technology tools (i.e., Cision, Meltwater, etc.) and other paid resources, the researcher chose to place a specific focus on low- or no-cost methods to measure and evaluate communications efforts for organizations with limited communications resources, capacity and/or budgets.

Methodology

Data collection and analysis

This research project used three types of evidence in data triangulation to answer the research problem and research questions (Yin, 2018): (1) an expansive literature review that included examples of M&E best practices, (2) a national online survey that gathered insights from Canadian PR practitioners about current M&E practices, barriers and gaps and (3) one-on-one, long-form interviews with leading communications thought leaders and C-suite communicators about M&E challenges and opportunities. This study was reviewed and approved by the McMaster University Research Ethics Board (see Appendix B for the clearance certificate). Reviewers provided feedback on all aspects of data collection and storage, including survey and interview questions as well as the language in letters of information/consent and recruitment posts/emails.

Survey

Based on insights and information gathered in the literature review, the researcher created an online survey (hosted on LimeSurvey) about communications measurement and evaluation.
The survey was pre-tested with ten PR practitioners to ensure clarity and comprehension; slight modifications to questions and flow were made based on feedback gathered during the pre-test. The final survey had 31 questions, though seven questions were only visible if a respondent answered a previous question in a specific way (see Appendix C for the survey letter of information and Appendix D for survey questions). The survey focused on five key areas: demographics, audience, objectives, measurement and evaluation, and support. All survey responses were anonymous; no one was asked to input their name, organization, email address or any other identifying information. Additionally, all questions had optional responses of "prefer not to answer" and "don't know" to minimize discomfort with providing information and to encourage participants to continue to the end of the survey rather than exit partway through.

Survey respondents were recruited via convenience, purposive and snowball sampling (Stacks, 2017); the researcher shared the survey link through their personal network, including on social media (Twitter/X, Instagram and LinkedIn) and via direct email and message to contacts, with a request to complete it and share it through their own networks as appropriate. Additionally, McMaster University's Master of Communications Management (MCM) program shared a link to the survey through its Twitter/X, Instagram and LinkedIn channels, which increased the reach to potential respondents. All Canadian PR practitioners were eligible to participate in the survey.

The survey was live for almost three weeks (September 11-29, 2023), during which time a total of 104 complete responses were captured. As shown in Table 1, respondents held junior to senior positions, from advisor and coordinator to vice president and chief communications officer. The largest group of respondents was managers (n=29).

Table 1
Survey question: What is your title?
Response (N=104)                n     %
Manager                         29    27.88%
Director                        19    18.27%
Consultant                      13    12.50%
Senior advisor                  13    12.50%
Advisor                         6     5.77%
Vice president                  6     5.77%
Other                           5     4.81%
Officer                         4     3.85%
Specialist                      4     3.85%
Coordinator                     3     2.88%
Chief communications officer    2     1.92%

Similarly, as shown in Table 2, respondents had a range of years of experience working in communications, from less than two years up to 16+.

Table 2
Survey question: How many years have you worked in communications/public relations?

Response (N=104)    n     %
0-2                 2     1.92%
3-5                 15    14.42%
6-10                27    25.96%
11-15               31    29.81%
16+                 29    27.88%

As shown in Table 3, almost half of survey respondents (46%) worked in the public sector, a quarter (26%) in the private sector, a fifth (19%) in the not-for-profit sector and 8% in a combination of the private/public/not-for-profit sectors.

Table 3
Survey question: Do you work in the private sector, the public sector or the not-for-profit sector?

Response (N=104)        n     %
Private                 27    25.96%
Public                  48    46.15%
Not-for-profit          20    19.23%
Combination             8     7.69%
Prefer not to answer    1     0.96%

PR practitioners from various industries participated in the survey, with the most coming from healthcare (28%) and government (16%) (see Table 4).

Table 4
Survey question: What sector [industry] do you work in?

Response (N=104)        n     %
Healthcare              29    27.88%
Government              17    16.35%
Other                   14    13.46%
Education               9     8.65%
Finance/banking         8     7.69%
Agency                  6     5.77%
Technology              6     5.77%
Energy/minerals         3     2.88%
Insurance               2     1.92%
Manufacturing           2     1.92%
Religion                2     1.92%
Retail                  2     1.92%
Transportation          2     1.92%
Prefer not to answer    2     1.92%

Respondents were asked about the size of their immediate communications team as well as how many people in total worked in communications across their organization. As shown in Table 5, the majority (63%) had immediate teams of 2-10 communicators. There was a more even split when asked how many people worked in communications across the entire organization.
Table 5
Survey question: How big is your immediate team/approximately how many people in total work in communications across your organization?

Response (N=104)    Immediate team      Entire organization
                    n     %             n     %
1 (just you)        13    12.50%        9     8.65%
2-10                66    63.46%        28    26.92%
11-20               18    17.31%        22    21.15%
21-50               5     4.81%         23    22.12%
51+                 2     1.92%         22    21.15%

Finally, respondents were asked about their team's annual M&E budget. As shown in Table 6, the majority either said they did not have a dedicated M&E budget (37%) or did not know (29%).

Table 6
Survey question: What is your communication team's annual measurement and evaluation budget?

Response (N=104)                                     n     %
We don't have a measurement and evaluation budget    38    36.54%
Don't know                                           30    28.85%
$1-999                                               3     2.88%
$1,000-2,499                                         2     1.92%
$5,000-9,999                                         3     2.88%
$10,000-24,999                                       7     6.73%
$25,000-49,999                                       2     1.92%
$50,000-99,999                                       4     3.85%
$100,000+                                            5     4.81%
Prefer not to answer                                 4     3.85%

Survey responses were analyzed both in aggregate (total counts and percentages of responses to each question) and coded and analyzed through data correlation in Excel. Correlations were used to indicate relationships between certain variables; in many cases, correlations were weak (below ±.30) or moderate (between ±.31 and ±.70) but were included where they indicated a stronger relationship than that seen between other variables (Stacks, 2017). Qualitative responses (to open-ended questions) were analyzed thematically to identify patterns, trends, key learnings and connections to the research problem and questions.

Interviews

Long-form, one-on-one interviews were planned with two distinct groups: communications/PR experts and C-suite communicators. Experts were defined as individuals who had published research/thought leadership related to M&E and/or belonged to groups that focus on M&E (such as the Institute for Public Relations' Measurement Commission (n.d.)).
C-suite communicators were individuals who sat at an executive level within their organization, be it corporate or agency. To access individuals for the one-on-one interviews, the researcher used a combination of purposive and snowball sampling (Stacks, 2017). The researcher started with a list of C-suite communicators and experts, then reached out through their own professional network (asking contacts with existing connections to those individuals to request their participation) and via direct communication to individuals whose contact information was publicly available. The researcher also asked all interviewees if they had recommendations for others the researcher should connect with; most provided names, and some reached out to additional individuals on the researcher's behalf. When contacted with an interview request, four C-suite communicators recommended the researcher speak with a different member of their team. In analysis, those four individuals were included in their own non-C-suite communicators category, as they were not at an executive level but rather were most often in charge of data/listening centres.

In total, the researcher spoke to 31 individuals from August 14 to October 2, 2023: 14 experts, 13 C-suite communicators and four non-C-suite communicators (see Table 7 for expert locations and Table 8 for C-suite and non-C-suite communicator locations and sectors). All 31 interviews were conducted via online platforms (Zoom or, in one case, Teams, where the individual was more comfortable with that platform). Interviews lasted between 20 and 60 minutes. Experts were asked ten questions, while C-suite and non-C-suite communicators were asked 16 questions (see Appendix E for the interview letter of information/consent; see Appendix F for interview questions for both experts and C-suite communicators). Audio was recorded using a recording device.
Table 7
Interviewed communications experts' locations

Pseudonym                     Location
Communications expert #1      United States
Communications expert #2      United States
Communications expert #3      United States
Communications expert #4      United States
Communications expert #5      United States
Communications expert #6      United States
Communications expert #7      United States
Communications expert #8      United States
Communications expert #9      United States
Communications expert #10     United States
Communications expert #11     United States
Communications expert #12     United States
Communications expert #13     United States
Communications expert #14     Europe

Table 8
Interviewed C-suite and non-C-suite communicators' locations and sectors

Pseudonym                      Location         Sector
C-suite communicator #1        United States    Consulting
C-suite communicator #2        Europe           Consulting
C-suite communicator #3        Canada           Energy
C-suite communicator #4        Europe           Consulting
C-suite communicator #5        Canada           Crisis management
C-suite communicator #6        Canada           Market research
C-suite communicator #7        United States    Automotive
C-suite communicator #8        Canada           Healthcare
C-suite communicator #9        Canada           Education
C-suite communicator #10       Canada           Healthcare
C-suite communicator #11       Canada           Healthcare
C-suite communicator #12       United States    Automotive
C-suite communicator #13       United States    Restaurants
Non-C-suite communicator #1    United States    Insurance
Non-C-suite communicator #2    United States    Airline
Non-C-suite communicator #3    United States    Technology
Non-C-suite communicator #4    United States    Consulting

There was an almost even split of male and female interviewees: 16 of the 31 were women. Women accounted for 50% of experts interviewed, 62% of C-suite communicators and 25% of non-C-suite communicators. All interviewees were asked about audience, objectives, and measurement and evaluation, as well as whether they had additional information they wanted to share and whether they had recommendations for additional interviewees.
C-suite and non-C-suite communicators were additionally asked demographic questions. Interviewed C-suite communicators had worked in communications for an average of 24 years (range: 16-35) and held positions such as senior vice president, vice president, global head, consultancy lead, principal and managing director. Those who could share their annual M&E budget reported amounts ranging from $85,000 to $4 million. Nine C-suite communicators said either that each client had their own budget or that they had no dedicated M&E budget. Interviewed non-C-suite communicators had worked in communications for an average of 14 years (range: 8-25) and held positions including lead, manager, senior manager and director. Two of the four shared their M&E budgets: $67,000 and $250,000.

After each discussion, the researcher transcribed the interview using a pseudonym for each interviewee (i.e., Communications expert #1, C-suite communicator #4) to anonymize responses. Once all interviews were complete and transcribed, the researcher entered the answers to each question into an Excel spreadsheet. The researcher used a combined strategy of relying on theoretical propositions and working the data from the ground up, reading through interview responses and identifying insights, relationships and themes related to information collected in the literature review (Yin, 2018). This strategy required multiple read-throughs of all answers to confirm themes and to identify how many (and which) interviewees said something related to each theme. The researcher used a coding framework to ensure consistency in identifying and applying themes (Yin, 2018). Themes were then analyzed against the research problem and questions and the literature review to identify key learnings and trends as part of an explanation-building analytic technique (Yin, 2018). This approach was "partly deductive (based on the statements or propositions at the outset…) and partly inductive (based on the data…)" (Yin, 2018, p.
181). Additionally, the researcher noted direct quotes, or verbatims, to "illustrate some key strategic finding[s] in the research" (Stacks, 2017, p. 201).

Intercoder reliability analysis was not used for this study for several reasons. First, the researcher faced practical constraints, including budget and the availability of additional coders. Additionally, given the nature of how data was collected, this study's results were not meant to be generalizable to the broader category of PR practitioners but rather to highlight insights at a moment in time: "Qualitative researchers' role is not to reveal universal objective facts but to apply their theoretical expertise to interpret and communicate the diversity of perspectives on a given topic" (O'Connor & Joffe, 2020, p. 4). Further, the researcher used the in-depth understanding gathered through the comprehensive literature review as a lens to better identify connections between collected data and theory, relying on a depth of knowledge built over months immersed in the topic that would not be present for other readers/coders (O'Connor & Joffe, 2020). To increase the validity and reliability of the study's findings, the researcher used data triangulation and a coding framework to ensure consistency when reviewing qualitative data (Yin, 2018).

Results

The following sections highlight results gathered through the survey of Canadian PR practitioners and the interviews with communications experts/thought leaders and C-suite and non-C-suite communicators. While there are nuances associated with the terms publics, audiences and stakeholders, the researcher used the theoretical assumption that many PR practitioners use the terms interchangeably and chose to use only the term audiences in data collection to avoid any confusion.
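The two quantitative steps described in the methodology — aggregate tabulation of "select all that apply" survey questions and Excel correlations between answer options — can be sketched in a few lines of code. The data and column names below are hypothetical (they are not the study's dataset), and the sketch assumes the reported correlations are Pearson r values computed on dummy-coded (0/1) answer indicators, which for a pair of binary variables is equivalent to the phi coefficient.

```python
import pandas as pd

# Hypothetical respondent-level extract; names and values are illustrative only.
df = pd.DataFrame({
    "listens": ["Yes", "No", "Yes", "No", "Yes", "Don't know"],
    "sector":  ["Private", "Not-for-profit", "Private", "Public",
                "Private", "Public"],
    # A "select all that apply" question stored as lists of chosen options.
    "info_types": [["Demographic"], ["Demographic", "Behavioural"],
                   ["Psychographic"], [], ["Demographic"], ["Behavioural"]],
})

# 1) Aggregate tabulation for a select-all question: percentages are taken
#    out of all respondents, so a column can sum to more than 100%.
counts = df["info_types"].explode().value_counts()
pct = (counts / len(df) * 100).round(2)
print(pct)

# 2) Dummy-code two single-choice questions (one 0/1 column per answer
#    option) and correlate every pair of answer indicators; each cell is
#    the Pearson r between two binary columns.
dummies = pd.get_dummies(df[["listens", "sector"]], dtype=float)
corr = dummies.corr()

# Cross-question block: rows = listening answers, columns = sectors.
listen_rows = [c for c in corr.index if c.startswith("listens_")]
sector_cols = [c for c in corr.columns if c.startswith("sector_")]
print(corr.loc[listen_rows, sector_cols].round(2))
```

Stacks's (2017) thresholds cited above (weak below ±.30, moderate between ±.31 and ±.70) can then be applied to each cell of the cross-question block, which is the form the correlation tables in the results take.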
RQ1: How and to what extent do PR practitioners know they are connecting with their stakeholders, audiences and publics and what evidence do they have that shows they are doing it?

Understanding audiences

This question first examined how and to what extent PR practitioners understand their audiences in a way that helps target their messaging. When asked about the type of information gathered about target audiences to inform PR efforts, 18% of survey respondents said they do not gather any information. 68% said they gather demographic information, 48% behavioural, 48% media consumption habits and 25% psychographic (see Table 9).

Table 9
Survey question: What type of information do you gather about your target audiences to inform your communications efforts? Select all that apply.

Response (N=104)                                                                 n     %
Demographic (i.e., age, location, gender, profession, etc.)                      71    68.27%
Behavioural (i.e., interactions with your organization, spending habits, etc.)   50    48.08%
Media consumption habits (i.e., where they get their information)                50    48.08%
Psychographic (i.e., values, desires, goals, lifestyle, political views, etc.)   26    25.00%
We don't gather any information about our target audiences                       19    18.27%
Don't know                                                                       2     1.92%

Of those respondents who gather some type of information (n=83; see Table 10), 63% said they do so through primary research, 59% through assumptions, 55% through secondary research and 47% through some type of media insights software. Additional methods of gathering intel ("other") included website analytics, email metrics and subscriber data from owned platforms.

Table 10
Survey question: Where do you get information about your target audiences? Select all that apply.

Response (N=83)                                                        n     %
Primary research (i.e., own study about them, focus groups, etc.)      52    62.65%
Assumptions (i.e., either your own or from your organization)          49    59.04%
Secondary research (i.e., Google, media, public opinion polls, etc.)   46    55.42%
Media insights software/tools (i.e., Cision, Meltwater, etc.)          39    46.99%
Other                                                                  6     7.23%
We don't gather any information about our target audiences             1     1.20%
Don't know                                                             1     1.20%

When creating a communications plan, respondents were most likely to choose tactics/channels/approach based on where their audience was most likely to be reached (70%), what they used in previous campaigns (65%), budget (63%), available time (52%), what's available (50%) and client preferences (46%).

C-suite (n=13) and non-C-suite communicators (n=4) were asked in interviews about their current practice for understanding their audiences. As shown in Table 11, all but one in each group said their primary method was talking and/or listening to audiences directly; C-suite communicators shared a variety of methods for doing so, including surveys, focus groups, one-on-one interviews, polling, public consultation/engagement, market research and social listening/monitoring. Non-C-suite communicators cited similar methods, including at events, through surveys and by looking at their audience's behaviour online through tools. The next most used method for C-suite communicators (62%) was secondary research/existing data in the organization, such as from HR, employee engagement surveys, applications and self-identification surveys. A third method was to segment audiences.

Table 11
Interview question (C-suite and non-C-suite communicators): What is your current practice to understand your audiences including their demographics, psychographics, media consumption, etc. as well as their needs, desires and other relevant information?

Theme                                          C-suite (N=13)    Non-C-suite (N=4)
                                               n     %           n     %
Talk and/or listen to them directly            12    92.31%      3     75.00%
Secondary research/data that already exists    8     61.54%      1     25.00%
Segment audiences                              3     23.08%      1     25.00%

Communications experts were asked for their top tips for finding the needs, desires and other relevant information for PR efforts.
As shown in Table 12, their top two tips were directly aligned with C-suite communicators' responses: talking and/or listening to audiences and using secondary research/existing data in the organization. A third theme was paying for research and/or using tools to get good insights, which included suggestions to commission formal research and to use various paid and free tools such as Google Trends, Global Web Index, ChatGPT, demographics databases and social media intelligence tools. Almost a third of experts (29%) also highlighted the importance of challenging assumptions about audiences by intentionally looking for contrarian views, being skeptical of whether data is correct (i.e., one highlighted that a lot of research is skewed towards white-collar workers) and eschewing broad categories to find more valuable, specific information.

Table 12
Interview question (communications experts): What is your top tip for finding the needs, desires and other information about audiences relevant to effective public relations?

Theme (N=14)                                           n     %
Talk and/or listen to them directly                    11    78.57%
Use secondary research/existing data in organization   10    71.43%
Pay for research/use tools to get good insights        8     57.14%
Challenge your assumptions                             4     28.57%
Segment audiences                                      2     14.29%

Engaging in dialogue and/or listening

This research question probed whether PR practitioners are engaging in dialogue with and/or listening in some way to their audiences and, if so, how that impacts their communications efforts. This question also examined whether practitioners have a feedback loop to identify audience response to a communications campaign and how that impacts their future efforts. Just over half (59%) of survey respondents said they have processes for regularly listening to their target audiences; 38% said they do not and 4% did not know.
A correlation analysis was done in Excel to determine if there was a relationship between PR practitioners listening to their audiences and the sector they work in (public, private, not-for-profit). As shown in Table 13, those who work in the private sector were more likely to say they listened to their audiences and those in the not-for-profit sector to say they did not.

Table 13
Correlation between answers to the questions Do you have processes established to regularly listen to your target audiences? and Do you work in the private sector, the public sector or the not-for-profit sector?

              Private    Public    Not for profit    Combination
Yes           0.19       -0.05     -0.14             0.02
No            -0.23      0.04      0.18              0.00
Don't know    0.11       0.02      -0.10             -0.06

Survey respondents who do regularly listen to their audiences (n=61; see Table 14) do so through direct feedback (84%), social listening (72%), surveys (67%) and direct outreach (66%).

Table 14
Survey question: How do you listen to your target audiences? Select all that apply.

Response (N=61)                                                      n     %
Direct feedback (i.e., responses to newsletters, complaints, etc.)   51    83.61%
Social listening (i.e., via social media)                            44    72.13%
Surveys                                                              41    67.21%
Direct outreach (i.e., phone calls, emails, etc.)                    40    65.57%
Other                                                                5     8.20%

C-suite communicators were asked how they engage in dialogue with and/or listen to their audiences, and their answers largely mirrored survey respondents': more than two thirds (71%) said they talk and/or listen through discussions/research such as surveys, meetings and direct feedback, and 57% said they listen through social media/media/website listening and analytics.

When asked how they confirm their efforts reached the intended audience, survey respondents were mixed: as shown in Table 15, 13% said they have no process for confirming target audiences were reached, while 40% simply assume.
Others use social listening (65%), surveys (34%), web/social analytics (14%), whether the call to action (i.e., event registration/attendance, sales, etc.) had been followed (8%), media coverage (5%) and other direct feedback (4%).

Table 15
Survey question: When you run a communications campaign, how do you confirm whether your efforts reached your target audience? Select all that apply.

Response (N=104)                                          n     %
Social listening                                          68    65.38%
Assumptions                                               42    40.38%
Survey                                                    35    33.65%
Web/social analytics                                      15    14.42%
We don't confirm whether target audiences were reached    14    13.46%
Call to action                                            9     8.65%
Media coverage                                            5     4.81%
Other direct feedback                                     4     3.85%

RQ2: What role do measurement and evaluation play in the formation of PR practitioners' efforts?

Leading M&E experts have suggested that a critical step in setting PR practitioners up for success is integrating M&E into the creation of a communications strategy. This research question aimed to understand to what extent PR practitioners use environmental scanning in their planning and how they incorporate measurable, SMART objectives into their communications campaigns.

Environmental scanning

When asked how frequently they conduct environmental scanning, a third of survey respondents (35%) said daily. From there, answers varied greatly, from weekly (10%) to less frequently than annually (9%), monthly, quarterly and annually (7% each) and never (6%). 12% of respondents did not know, indicating it is not part of their current practice. Those who answered "other" (7%) suggested it happened ad hoc or indicated they were not aware of the meaning of the question. In interviews, C-suite and non-C-suite communicators were asked about their current approach to environmental scanning.
As shown in Table 16, almost half (46%) of C-suite communicators did not have a regular practice of looking at trends outside of their own organization and/or did not understand the question (i.e., thought it was referring to gathering information about the natural environment). The same proportion (46%) looked externally for campaign-specific research, for example to ensure they understood potential issues that could arise from a specific communications campaign. 38% used social analytics/listening/media monitoring regularly to find opportunities for brand moments and to stay on top of issues relevant to their organization. Another 38% looked at industry news and analysis, which included regulatory/political/cultural updates that may impact the business, global trends and competitor news. Non-C-suite communicators had a practice of staying on top of industry news and analysis (75%), using social analytics/listening/media monitoring to see updates in real time (50%) and doing campaign-specific research (25%).

Table 16
Interview question (C-suite and non-C-suite communicators): What is your current approach to environmental scanning?

Theme                                                                    C-suite (N=13)    Non-C-suite (N=4)
                                                                         n     %           n     %
Issue/campaign specific research                                         6     46.15%      1     25.00%
Don't look at trends outside of own org/did not understand the question  6     46.15%      0     0.00%
Industry news/analysis                                                   5     38.46%      3     75.00%
Social analytics/listening/media monitoring                              5     38.46%      2     50.00%

Communications experts were asked for their recommendations for effective environmental scanning.
Their suggestions included using academic research and free or paid tools (i.e., SparkToro, Answer The Public, TalkWalker) (57%), social listening/media monitoring (57%), being specific about the issues being investigated externally (i.e., those that are most relevant to the organization, rather than trying to boil the ocean) (43%), building in audience specificity/looking at influencers in the space (43%), approaching it from a place of curiosity and going beyond normal methods of gathering intel (i.e., intentionally looking to avoid confirmation bias) (29%) and, finally, talking to people within the organization to ask what they are hearing/seeing/think is important to focus on (21%).

Measurable objectives

The second part of this research question aimed to understand to what extent PR practitioners set measurable objectives and are aware of the differences between outputs, outtakes, outcomes and impact, and incorporate them to measure the success of their work. The survey asked whether there was an expectation in PR practitioners' organizations for measurable communications objectives. As shown in Table 17, 63% of respondents said yes, 31% said no, 4% said they did not know and 3% chose not to answer the question.

Table 17
Survey question: Is there an expectation in your organization for measurable communications objectives?

Response (N=104)        n     %
Yes                     65    62.50%
No                      32    30.77%
Don't know              4     3.85%
Prefer not to answer    3     2.88%

A correlation was run to determine if there was a relationship between whether there is an organizational expectation for measurable communications objectives and the sector survey respondents work in (public, private, not-for-profit). As shown in Table 18, those who work in the private sector were most likely to say there was an expectation for measurable objectives and those in the public sector to say there was not.
Table 18
Correlation between answers to the questions Is there an expectation in your organization for measurable communications objectives? and Do you work in the private sector, the public sector or the not-for-profit sector?

              Private    Public    Not for profit    Combination
Yes           0.28       -0.24     -0.08             0.07
No            -0.35      0.26      0.10              -0.04
Don't know    0.11       -0.08     0.03              -0.06

When asked how often the objectives in recent communications plans were measurable, the largest percentage of survey respondents (39%) said sometimes, 27% said most of the time, 12% said always, and 11% each said never and about half the time (see Table 19).

Table 19
Survey question: Thinking about recent communications plans, how often were your objectives measurable (i.e., tied to specific numbers – increase social impressions from X to X or by X%, etc.)?

Response (N=104)       n     %
Never                  11    10.58%
Sometimes              41    39.42%
About half the time    11    10.58%
Most of the time       28    26.92%
Always                 12    11.54%
Don't know             1     0.96%

A correlation was run to determine if there was a relationship between how often survey respondents' objectives were measurable and the sector survey respondents work in (public, private, not-for-profit; see Table 20). Those who work in the private sector were more likely to say their objectives were measurable most of the time or always. Respondents who work in the public sec