Today's national government media release on COVID-19, which offers an insight into public health promotion, states:
An important education campaign will be rolled out to inform Australians ahead of the COVID-19 vaccination program.
The vaccination roll-out will be a complex task, and with the first vaccinations on track for early next year, it will be important that people understand the process.
Minister for Health, Greg Hunt, said the Mid-Year Economic and Fiscal Outlook 2020-21 (MYEFO) reinforces the Australian Government’s commitment to continuing to protect the community and getting lives back to normal through this pandemic.
“The information campaign, with funding of $23.9 million, will work in partnership with the states and medical experts, to explain the regulatory processes, the priority groups, timing and roll-out to assist people to understand how the vaccines work, and to be ready for when they can receive the vaccine,” said Minister Hunt.
“The vaccines will be voluntary and free; we encourage people to have the vaccine to protect themselves and their family.”
“It is essential that people understand that Australia’s medical regulatory processes need to occur before the vaccines are approved for use. We are receiving data from overseas and this will assist in finalising the priority groups for the vaccinations, putting our health and aged care workers in the first wave along with elderly Australians who are at most risk from the virus,” Minister Hunt said.
The COVID-19 vaccine communications will include a national advertising campaign and communication specifically targeting priority groups, culturally and linguistically diverse (CALD) groups and Aboriginal and Torres Strait Islander people.
Over $40 million in funding is being provided to streamline the processes necessary for vaccine approval and distribution, so that clinical information can be assessed in real time.
The funding for Services Australia, the Australian Digital Health Agency and Therapeutic Goods Administration will allow necessary enhancements aimed at reducing existing manual processes and improving digital integration across these systems.
The Government is also improving critical capacity requirements for the Australian Immunisation Register, the Government’s central resource for recording COVID-19 vaccinations, so people will have a record of their vaccination. This will be essential, as all current vaccines planned for Australia require a two-dose schedule.
The Government is securing 20 million additional doses of the AstraZeneca COVID-19 vaccine. This brings the total number of doses to 53.8 million – enough for the entire Australian population.
The extra 20 million doses will be produced in Australia by CSL.
Additionally, a further 11 million doses of the Novavax vaccine will be purchased, bringing the total for this vaccine to 51 million. This will be an additional whole-of-population vaccine should it be proven to be safe and effective.
A purchasing agreement is also in place for the Pfizer/BioNTech COVID-19 vaccine, with 10 million doses scheduled for early 2021.
Building Australia’s vaccine manufacturing capacity
The Australian Government is investing $1 billion to ensure Australia’s capacity to manufacture vaccines in the future, through its supply agreement with Seqirus.
The Government will extend the current supply agreement with Seqirus, ensuring long-term, onshore manufacturing and supply of products of national significance including pandemic influenza vaccines, Q fever vaccines, and Australian-specific antivenoms from 1 July 2024 through to 2036.
Under this agreement, Seqirus will invest more than $800 million in a new state-of-the-art biotech manufacturing facility in Melbourne.
In the United States, the House of Representatives Committee on Veterans' Affairs report 'Hijacking Our Heroes: Exploiting Veterans Through Disinformation on Social Media' states:
The threat of foreign individuals and organizations
influencing United States (U.S.) elections by manipulating
social media has been a persistent and growing issue since
before the 2016 election year. The threat was a significant
concern during the 2020 elections.
Recent investigations and analysis document the broad
proliferation of online influence campaigns that originate
overseas. This includes the use of "spoofing," or the act of
disguising an electronic communication from an unknown source
as being from a known, trusted source. A subset of these
operations target the veteran and military service member
communities in order to misappropriate their voices, authority
and credibility. The pervasiveness of social media, as well as
the nature of the specific threat to our election integrity and
the sowing of political discord, makes this a critical issue
affecting both veterans and those who value veterans' voices.
As described by Chairman of the House Committee on Veterans'
Affairs, Mark Takano (D-CA), "the issue of protecting our
elections from foreign influence is one of critical importance
to all Americans and preserving the power of veterans' voices
should be of equal concern to us all."
VETERANS ARE SPECIFICALLY TARGETED FOR SPOOFING
On Wednesday, November 13, 2019, the House Committee on
Veterans' Affairs held an investigative hearing to examine the
nature and scope of threats posed to the veterans' community
through "internet spoofing." Experts testified that stolen,
misappropriated, or fraudulently created social media accounts
can be used to target veterans for the purposes of
disseminating political propaganda and fake news in order to
influence elections. The witnesses also described romance scams
and commercial fraud being perpetrated using spoofing
techniques. Representatives of three major social media
platforms--Facebook, Instagram, and Twitter--discussed how they
are addressing this threat, particularly considering the 2020
elections, and described best practices for information
sharing, protective measures, and law enforcement cooperation.
The Committee later held a briefing on January 14, 2020, with
representatives from several components of the Federal Bureau
of Investigation (FBI) that handle law enforcement for online
crimes.
Ranking Member Dr. David P. Roe (R-TN) noted during the
hearing, "The evidence is clear that veterans have their
identity misappropriated and that they, like other social media
users, could be targets for propaganda or scams." Although
everyone who uses the internet is subject to online scams,
spamming, phishing, identity theft, and other such risks,
veterans are particularly susceptible to internet spoofing
based on their higher propensity for political engagement
(including running for office, volunteering, and sharing
political opinions and information). Veterans are targeted for
the dissemination of political propaganda and for efforts to
divide Americans on sensitive political "wedge issues" because
of their close identification with strong national security
policies, patriotism, personal sacrifice, and honor. Chairman
Takano stated during the hearing, "By impersonating veterans,
these foreign actors are effectively eroding the hard-earned
power and integrity of veterans' voices."
Veterans are more likely to be engaged in their
communities, be perceived as leaders, and can exert influence
on political matters (particularly with respect to defense and
national security matters). Therefore, a successful spoofing
scam that results in a veteran or Veteran Service Organization
(VSO) unknowingly distributing or endorsing a piece of
disinformation can yield greatly increased, and sometimes even
exponential, results due to the added credibility imparted to
that disinformation by virtue of its approval by the veteran or
VSO. With each successive endorsement or share, the credibility
of the disinformation snowballs. The collective association
with actual veterans and VSOs makes it increasingly unlikely
that the disinformation will be closely scrutinized,
questioned, or eventually exposed as fraudulent or misleading.
Moreover, scammers also try to spoof veterans to gain leverage
over them. Many veterans move into jobs requiring security
clearances or within the federal government after they leave
the military--those positions can be jeopardized if the veteran
is compromised through financial fraud, identity theft, or
otherwise becomes susceptible to blackmail.
SPOOFING OF VETERANS THREATENS U.S. ELECTIONS
Internet spoofing became a visible problem in the context
of the 2016 U.S. election, when foreign disinformation spread
widely across social media, including Facebook, Instagram,
Twitter and YouTube, among others. However, disinformation on
social media and information operations conducted by
sophisticated actors have occurred for far longer. In the past
few years, foreign information operations have targeted
divisive political issues within American society and have
sought to manipulate and divide political and social
communities. Unfortunately, our military and veterans'
communities are no exception. Moreover, the incidents of
foreign spoofing increased following the 2016 election, and
industry experts project that these numbers will continue to
increase through 2020 and beyond. Russia's Internet Research
Agency (IRA), a Russian company that has engaged in online
influence operations, more commonly known as a "troll farm,"
dramatically expanded its information operations after the 2016
U.S. Presidential elections, both in terms of volume and
intensity. Russia and Iran are the most prominent state actors
in this context, but recent work has identified additional
state actors, such as China and Saudi Arabia, using information
operations to target communities and topics of interest.
The Senate Select Committee on Intelligence published a
five-volume bipartisan report focused on Russia's influence
operations. The second volume focused on Russia's use of social
media platforms to influence the election, while the third
volume focused on the shortcomings of Obama Administration
efforts to combat the ongoing attacks. The third volume
highlighted the lack of legislative or regulatory action to
combat a known threat emanating from Russia and its
intelligence services. The Senate Report sheds light on the
broader issues of misinformation campaigns and predatory
schemes targeting veterans presented in a report prepared by
the Vietnam Veterans of America (VVA).
ACTION BY LAW ENFORCEMENT AND SOCIAL MEDIA PLATFORMS
IS INADEQUATE
Industry analysts, journalists, and law enforcement agree
that the problems of internet spoofing and foreign influence
exerted through social media continue to grow at an alarming
pace. However, neither the major platforms nor the FBI were
able to identify an obvious or comprehensive solution to this
ongoing problem. Both continue to devote significant resources
towards combatting spoofing. However, the foreign entities who
perpetrate much of this illicit activity are becoming more
sophisticated in their schemes and are targeting broader swaths
of internet users to more quickly and efficiently disseminate
their fraudulent messaging before they are identified and
deactivated.
Facebook and Twitter note that automated systems can
struggle to differentiate authentic images and accounts from
fraudulent, unauthorized, or duplicated accounts and thereby
risk erroneously flagging and removing legitimate accounts. The
platforms have chosen to err on the side of minimizing false
positives by relying upon patterns of suspicious activity and
certain tactics or techniques, rather than on other identifying
data (e.g., duplicative names or images, geolocation
information, or ostensible organizational affiliations).
Suspicious activity patterns, such as irregular, repetitive, or
voluminous posting, trigger additional layers of review,
including an examination of the geolocation data in order to
assess where the suspicious activity may be originating. The
final review and removal decisions sometimes warrant human
examination, but often removals are made without any human
review. Although these layered review processes may be
effective in protecting legitimate users, they undoubtedly also
introduce a significant delay in removing fraudulent accounts,
which provides a window within which spoofers can continue to
operate.
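The layered, pattern-based review described above can be sketched in a few lines of code. This is a minimal illustration only: the thresholds, field names, and review tiers below are invented for the example, and real platform systems are far more complex. The point it shows is the trade-off the report describes — relying on behavioral signals (volume, repetition, inconsistent geolocation) rather than identity data, with geolocation checked only as a second layer.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AccountActivity:
    """Hypothetical per-account signals a pattern-based reviewer might see."""
    posts_last_hour: int
    duplicate_post_ratio: float   # fraction of posts repeating earlier text
    geolocations: List[str] = field(default_factory=list)

def review_tier(activity: AccountActivity) -> str:
    """Assign a review tier using only behavioral patterns, not names or
    profile images -- mirroring the trade-off described in the report."""
    suspicious = (
        activity.posts_last_hour > 50            # voluminous posting
        or activity.duplicate_post_ratio > 0.8   # repetitive posting
    )
    if not suspicious:
        return "none"
    # Suspicious patterns trigger a geolocation check as a second layer.
    if len(set(activity.geolocations)) > 3:      # many distinct origins
        return "automated-removal"
    return "human-review"

# Usage: a bursty, repetitive account posting from many locations.
bot_like = AccountActivity(posts_last_hour=120, duplicate_post_ratio=0.9,
                           geolocations=["US", "RU", "NG", "VN"])
print(review_tier(bot_like))  # -> automated-removal
```

Note that the final `human-review` tier in this sketch corresponds to the report's observation that some removals warrant human examination while others are fully automated — and that every added layer widens the window in which a spoofed account can keep operating.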
Law enforcement agencies, such as the FBI, are constrained
in their abilities to efficiently identify and eliminate
spoofers because the agencies only have limited access to the
data held by the social media platforms. Often these agencies
do not receive important information until after the platforms
have already removed a spoofed account, at which point law
enforcement is unable to actively monitor and trace the account
in real time.
The ability of spoofers to operate from overseas,
anonymously, or by using fraudulent or concealed identities
requires law enforcement to rely upon account identification
data and detailed activity patterns in order to accurately
identify or locate the potential spoofer. However, Title II of
the Electronic Communications Privacy Act (ECPA) (18 U.S.C.
§§ 2701-2713), known as the Stored Communications Act,
requires a government entity to serve a subpoena on social
media platforms to compel the production of certain relevant
information. Requiring a time-consuming legal process to obtain
identification data hampers the ability of law enforcement to
respond quickly or to fully understand the scope of a potential
spoofing campaign. Therefore, the law enforcement agencies
recommend increasing the amount and level of detail that the
platforms can easily provide to the authorities.
Past attempts to address this problem have been piecemeal
in nature and have proven ineffective to date. This fragmented
approach has prevented any wholesale, systemic efforts to
tighten rules or law enforcement protocols. Incremental
adjustments have been made by individual platforms, which
leaves an irregular landscape where motivated, corrupt actors
may still be able to exploit weaknesses among the platforms.
THE FEDERAL GOVERNMENT AND THE SOCIAL MEDIA PLATFORMS SHOULD TAKE
ADDITIONAL ACTION
Based on discussions with representatives of law
enforcement, and considering the issues raised by the social
media platforms during the hearing, the Committee believes that
there are additional measures needed to address the growing
threats posed by spoofing. Our recommendations fall into two
broad categories.
The first category is oriented at users of social media and
is defensive in nature, such as teaching users how to be aware
of the dangers posed by spoofers on social media and training
them how to protect themselves through heightened vigilance,
healthy skepticism, and adherence to basic principles of cyber-
hygiene.
1. Improve Awareness through a Public Service
Announcement Campaign
2. Develop Cyber-hygiene Training
3. Strengthen Partnership Between Social Media
Platforms and VSOs
The second category is aimed at putting the social media
platforms and law enforcement on the offensive and developing
robust mechanisms to more effectively identify and quickly
eliminate foreign-based spoofers. While the first category is
likely to be less costly and easier to implement, the second
category may ultimately prove to be more effective in bringing
the threat under control.
4. Improve Reviews of Accounts by Social Media
Platforms
5. Consider Legislative Reforms to Facilitate Sharing
Information
6. Increase Data Sharing on Fraudulent Accounts
7. Improve Identity Verification and Geolocation
Identification
The recommendations are described in more detail below.
Improve Awareness
1. Improve Awareness through a Public Service Announcement
Campaign--As noted by several Committee Members, FBI
representatives, and testifying witnesses, the problem of
spoofing is exacerbated by a general lack of public awareness
of the issue and unfamiliarity with how to assess online
content in order to evaluate authenticity. Warnings of the risk
that social media content may not actually be from legitimate
sources or be deliberately planted for exploitative purposes
can be effectively and efficiently communicated through a
public awareness campaign, such as through public service
announcements (PSA). These public awareness campaigns can be
distributed through the social media platforms themselves, or
more comprehensively through other media outlets and agencies,
such as VA.
2. Develop Cyber-hygiene Training--VA and the Department of
Defense should develop robust and comprehensive cyber-hygiene
training. This would go beyond the basic information provided
by public awareness campaigns. For example, agencies could
provide training on best practices in protecting personal and
financial information, how to read and review content online
with an eye towards verification, and how to engage the
platforms themselves when needed to remove spoofed accounts,
fraudulent posts, or other deceptive content.
3. Strengthen Partnerships Between Social Media Platforms
and VSOs--A strong partnership could include an ongoing process
for VSOs to contribute their expertise and familiarity to
assist the social media platforms in their efforts to address
spoofing. The social media platforms noted that it can be
difficult to differentiate legitimate content from veterans or
VSOs from spoofed content purporting to be from the veterans'
community. There are ample resources within the broader
veterans' community to help advise and consult with the
platforms on such questions.
Strengthen Prevention and Enforcement Methods
4. Improve Reviews of Accounts by Social Media Platforms--
The social media platforms should implement stronger reviews of
accounts that pose substantial risk of spoofing. This should
include the adoption of industry-developed best practices
involving accounts that control groups or pages with very large
reach in order to closely scrutinize activity on these groups
or pages to quickly identify potential patterns of suspicious
behavior. Given their influence and reach, any such groups or
pages that meet or exceed certain thresholds of followership
should have their controlling accounts officially verified by
the social media platforms, with the details of such
verification (ownership, geolocation, moderators, etc.) made
publicly available to all users.
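The threshold rule this recommendation describes can be illustrated with a short sketch. The report names no specific follower cutoff, so the 100,000 figure and the record fields below are invented for the example; this is not any platform's actual policy.

```python
# Illustrative sketch of the recommendation: pages whose reach exceeds a
# threshold require a verified controlling account, with verification
# details published. The cutoff and fields are hypothetical.
FOLLOWER_THRESHOLD = 100_000  # invented cutoff; the report names no number

def needs_verified_controller(follower_count: int) -> bool:
    """A page at or above the threshold must have a verified controlling account."""
    return follower_count >= FOLLOWER_THRESHOLD

def verification_record(owner: str, geolocation: str, moderators: list) -> dict:
    """The publicly viewable details the recommendation calls for."""
    return {"owner": owner, "geolocation": geolocation, "moderators": moderators}

# Usage: a high-reach page trips the rule and publishes its details.
if needs_verified_controller(250_000):
    record = verification_record("Example VSO", "US", ["mod_a", "mod_b"])
    print(record["geolocation"])  # -> US
```

The design choice worth noting is that verification is tied to reach rather than to content: a small account posing little amplification risk is left alone, while any account controlling a high-followership page must surface its ownership and location.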
5. Consider Legislative Reforms to Facilitate Sharing
Information--Congress should consider appropriate modifications
to the federal laws that currently limit the social media
platforms' ability to freely share data with law enforcement
agencies or other peer platforms in order to detect, prevent,
or remove fraudulent or spoofed content in a timely and
efficient manner. Federal law is murky on how the privacy
rights of users intersect with law enforcement needs with
respect to data or identification information in cases of
potential illegal activity or fraud. The platforms have
generally erred on the side of maintaining user privacy in the
absence of a clear legal requirement to provide such data to
law enforcement agencies. However, there are certain
inconsistencies in the existing laws governing voluntary
disclosures to law enforcement which contribute to challenges
and delays. Congress could align the scope of voluntary
disclosure of information to law enforcement under the
respective provisions of Title II of ECPA to facilitate greater
transparency and timely information sharing with law
enforcement. This would essentially allow holders of electronic
communications and records to voluntarily release the data
associated with fraudulent, spoofed, or misappropriated
accounts to law enforcement agencies and potentially also to
their enforcement counterparts at peer platforms, when criminal
activity or other imminent harm is reasonably suspected.
However, any new legislation in this area or any change to the
ECPA statute must be both narrow in scope and include strong
safeguards to protect the personal privacy and civil rights
concerns of users.
6. Increase Data Sharing on Fraudulent Accounts--Social
media platforms should improve their sharing of identified
fraudulent and spoofed accounts with other platforms and law
enforcement to the extent permissible under current statutes,
both in terms of frequency of sharing and the scope of the data
that is shared. Although ECPA protects underlying identifying
information, there is other information about spoofed accounts
that can still be shared. Increasing the scope and timeliness
of shared information pertaining to accounts that have been
identified, and likely removed as fraudulent or spoofed, would
enhance cross-platform detection. Additionally, consistent
protocols could be established around communication between the
platforms and law enforcement, and amongst the platforms, to
ensure that information is shared on a regular and timely
basis, rather than only in response to crises or incidents.
This sharing of information should be narrow in scope and
include strong safeguards to protect the personal privacy and
civil rights concerns of users.
7. Improve Identity Verification and Geolocation
Identification--Social media platforms should improve their
verification of identities, affiliations, and geolocation for
all accounts. This would create a consistent and more robust
version of the verification and checkmark system that was
previously employed in various permutations by Twitter and
Facebook. This would make it more difficult for foreign actors
to disguise or misrepresent their locations (and consequently
their identities). The geolocation and account ownership
information should then be readily available to users and to
law enforcement, to increase transparency and foreclose
intentional concealment of where an account is based.