By StackCommerce, Mashable Offers
Consider your own personal Facebook feed for a second. How often do you find yourself liking and clicking on advertisements without even realizing that they are advertisements? Damn it, Zuckerberg.
The good news is that you can take control of this powerful digital marketing medium yourself, and this set of Facebook marketing courses is a good first step.
The set includes nine courses and 579 lessons that dive deep into the weird and wonderful world of Facebook marketing. You'll start by learning how to create your own Facebook ads and crack the code for writing the perfect posts, as well as how to design your personal profile and business page to generate conversions. Then you'll go beneath the surface, learning the Facebook sales funnel model, the secrets of retargeting campaigns, and how to build a profitable chatbot.
The entire Facebook Marketing Master Class is valued at $1,887, but you can grab it for only $35 right now.
The social media giant has been taking its fight against misinformation and disinformation more seriously in recent years. The company has cracked down on malicious accounts spreading misinformation and updated its policies to boot bad actors off the platform. Studies even suggest that the measures Facebook has taken are working. That's the good news.
However, a recently discovered loophole proves that Facebook CEO Mark Zuckerberg and the rest of his company still have a lot of work to do.
Mashable has learned that Facebook groups make it easy for Facebook pages to create and spread fake news and disinformation via a feature that was supposedly shut down two years ago: editable link previews.
Link previews are the built-in elements that appear when you post a link on Facebook. They usually contain a large thumbnail, the root URL the link comes from, the article's title, and a brief description. Facebook automatically populates this information from the image, title, and description published on the linked website. It's a nicer way to present what would otherwise be a plain text link.
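Facebook reads this information from the linked page's Open Graph meta tags. A minimal sketch of how a preview crawler can extract them, using invented sample markup (the class name and example values below are illustrative, not Facebook's actual code):

```python
# Minimal sketch of how a link-preview crawler can pull Open Graph
# metadata (the og:* meta tags) out of a page's HTML. The sample
# markup is invented for illustration.
from html.parser import HTMLParser

class OpenGraphParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = {}

    def handle_starttag(self, tag, attrs):
        # Collect only <meta property="og:..." content="..."> tags.
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:"):
            self.tags[prop] = attrs.get("content", "")

sample_html = """
<html><head>
  <meta property="og:title" content="Howard Schultz drops out of the race" />
  <meta property="og:description" content="The former Starbucks CEO ends his bid." />
  <meta property="og:image" content="https://example.com/thumb.jpg" />
</head><body></body></html>
"""

parser = OpenGraphParser()
parser.feed(sample_html)
print(parser.tags["og:title"])  # Howard Schultz drops out of the race
```

Because the preview is populated from metadata rather than the article body, anyone allowed to override these fields controls what readers see without touching the underlying URL.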
Here is an example of a link preview for CNN's story on independent presidential candidate Howard Schultz dropping out of the race.
Before mid-2017, any user who pasted a link on Facebook could edit the image, title, and description however they liked before posting the link publicly on the site.
"By removing the ability to customize link metadata (title, description, image) from all link sharing entry points on Facebook, we are eliminating a channel that has been used to post false information," Facebook said in 2017, explaining the move on its developer site.
In a private Facebook group discussing social media topics, I recently came across a user who explained how he had discovered a strange bug that still allowed Facebook pages to change link preview metadata. Again, this is a feature that was supposedly shut down entirely two years ago.
"If I share a Mashable story about the importance of vaccines on a Facebook page," explained the user, who requested anonymity for this article, in a private message on Facebook, "I could change the title to say 'New report says vaccines are bogus.'"
Take the Howard Schultz example from earlier. Using my Facebook page and the loophole, I was able to post the same CNN link in a Facebook group with a modified title, completely changing the meaning of the story. You can see the result above. An average Facebook user scrolling past a CNN article with a fake title has no way of knowing it was altered unless they click through.
"I reported it to [Facebook] a few months ago," the source wrote to me more than three weeks ago. "I reported it again not long ago, and two days ago they responded automatically and closed the ticket."
This person provided me with a screenshot of a support ticket dated August 2. The message sent to Facebook details the problem, the concern about malicious users spreading misinformation, and a note explaining that they had first contacted the company about the issue in February or March of this year.
Ten days later, Facebook responded.
"We have received your report, and we thank you for your patience as we work to resolve technical problems on Facebook," said the company. "While we can't follow up with everyone who submits a report, we use your feedback to improve the Facebook experience for everyone."
The support ticket was quickly closed.
As of publication time, I can confirm that Facebook pages can still edit link preview metadata in Facebook groups.
Facebook has ramped up its fight against fake news on its platform in recent years. In the wake of the 2016 presidential election, the social network was hit by a wave of criticism over misinformation scandals: bad actors used the site for propaganda, clickbait website owners generated profitable traffic for their fake political news blogs, and the UN even found that misinformation spread on Facebook played a role in the genocide of Rohingya Muslims in Myanmar.
However, with Facebook's shift in focus toward groups, misinformation peddlers and other bad actors have moved their fake-news sharing into private and secret groups. In fact, just last month the social media giant published a lengthy post reminding Facebook group users that its policies apply even within these private communities. The company is clearly aware of these problems in groups, which makes it all the more puzzling that this issue hasn't been fixed.
Studies have found that a majority of people don't read past the headline of an article they see online. On top of that, many will share it after reading only the headline, a sad but unsurprising fact of the digital age. The ability to edit the title and short description of a shared link can easily be abused by bad actors. Add to that the fact that the Facebook link preview retains the official URL the article comes from, say CNN.com or NYTimes.com, even after the metadata has been changed, and you have a problem.
"Our research shows that a very small group of people on Facebook regularly share a large number of public posts every day, effectively spamming people's feeds," said Adam Mosseri, then head of Facebook's News Feed, when the company shut down the link preview editing feature. "Our research also shows that the links they share are usually low-quality content such as clickbait, sensationalism, and misinformation. As a result, we want to reduce the influence of these spammers and deprioritize the links they share more frequently than regular users."
At the time, Facebook gave publishers with Facebook pages some leeway after the change. Media outlets had a few more months during which editing link preview metadata was still possible. Meanwhile, publishers were asked to register for link preview editing access for the domain names they owned. This lets Facebook pages edit link previews, but only for links to their own websites.
Nowadays, getting your Facebook page and your domain name approved is the only way to edit link preview metadata on your Facebook page. There is no way to change link previews for sites you don't own … except via this strange group loophole, of course.
One thing to note is that Facebook pages were not always allowed to join groups. That feature rolled out in September 2018. Perhaps that's why this workaround exists at all and has been able to fly under the radar for as long as it has.
Although it is not confirmed whether this is a bug, judging by the changes to Facebook's policies over the years, it seems very unlikely that this behavior is intentional. Mashable has reached out to Facebook for comment and will update this post if we receive a response.
It turns out that video games can be a great way to teach skills to artificial intelligence assistants. That's the theory of a group of Facebook researchers who have settled on Minecraft as a potential teaching tool for building a generalist AI, a so-called virtual assistant. The research team isn't trying to build an AI that is super-good at classifying images or any other single task; it wants to create a generalist AI that can handle many different tasks.
This is currently an under-explored area of research. As the authors write:
Measurable progress has also been made in this setting, with the mainstream adoption of virtual personal assistants. These are able to accomplish thousands of tasks communicated via natural language, using multi-turn dialogue for clarification or further specification. Assistants can interact with other applications to fetch data or perform actions.
However, many difficult problems remain unsolved. Natural language understanding (NLU) is still rigid and limited to constrained scenarios. Methods for using dialogue or other natural language for richer supervision remain primitive. In addition, because they must reliably and predictably solve many simple tasks, and because of their multimodal inputs and the constraints of their maintenance and deployment, assistants are modular systems, as opposed to monolithic ML models. Modular ML systems that can improve from data while maintaining well-defined interfaces are not yet well studied.
According to the team, Minecraft was chosen because it offers a rich distribution of tasks well suited to NLU research, as well as plenty of opportunities for human-AI interaction. Minecraft, for anyone who has never played it or heard of it, is a block-based crafting and exploration game in which players explore a 3D voxel grid populated by various types of materials, neutral characters, and enemies. The team's goal is to create an AI virtual assistant that can take natural language instructions from a Minecraft player and reliably perform some of the main tasks the player might engage in, including gathering materials, building structures, and crafting items.
The paper's authors aim at three specific achievements: creating a synergy between machine learning and non-machine-learning components so they work together; creating a "grounded" natural language understanding that lets the AI figure out what players want and communicate its success or failure back to the user; and creating an AI that doesn't just do what the player wants, but also improves its performance by observing the human player.
We want the player to be able to specify tasks through dialogue (rather than by single commands), so that the agent can ask for missing information, or the player can interrupt the agent's actions to clarify. In addition, we hope dialogue will be useful for providing rich supervision. The player can label attributes of the environment, for example "this house is too big"; relations between objects in the environment (or other concepts the bot understands), for example "the window is in the middle of the wall"; or rules about such relations or attributes. We expect the player to be able to question the agent's mental state to give appropriate feedback, and we expect the bot to ask for confirmation and use active learning strategies.
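The dialogue-driven task loop described above, where the agent either acts on a command or asks a clarifying question when information is missing, can be sketched roughly as follows. This is a hypothetical toy, not Facebook's actual bot code; all function names and the keyword-matching logic are invented for illustration:

```python
# Toy sketch of a dialogue-driven command loop: parse an utterance,
# act if it is complete, or ask a clarifying question if the target
# is missing. All names here are hypothetical.

STOPWORDS = {"a", "an", "the", "please"}

def parse_command(utterance):
    """Naive keyword-based parser mapping an utterance to (action, target)."""
    words = [w for w in utterance.lower().split() if w not in STOPWORDS]
    for verb, action in (("build", "build"), ("mine", "gather"), ("gather", "gather")):
        if verb in words:
            idx = words.index(verb)
            target = words[idx + 1] if idx + 1 < len(words) else None
            return (action, target)
    return ("unknown", None)

def agent_step(utterance):
    """Return the agent's reply: either act, or ask a clarifying question."""
    action, target = parse_command(utterance)
    if action == "unknown":
        return "Sorry, I don't understand. Can you rephrase?"
    if target is None:
        # Multi-turn dialogue: ask for the missing information.
        return f"What would you like me to {action}?"
    return f"OK, I will {action} a {target}."

print(agent_step("please build a house"))  # OK, I will build a house.
print(agent_step("build"))                 # What would you like me to build?
```

A real assistant replaces the keyword matcher with a learned semantic parser, but the control flow, act when the command is complete, otherwise ask, is the same idea the quote describes.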
The machine learning code for Facebook's Minecraft bot is available on GitHub. Better AI tools could be useful in many games, although they may also raise serious questions about what constitutes cheating in multiplayer.
As you might imagine, the most popular services and applications are prime targets for phishing attacks, since attackers can reach more victims with the same effort. In the second quarter of 2019, according to Vade Secure data, Microsoft was by far the most impersonated company.
For the fifth consecutive quarter, fake Microsoft accounts, links, and pages were the most used in phishing campaigns. So when we receive an email that appears to come from Microsoft, it is advisable to take every precaution to determine whether it is a scam, and when in doubt, never enter personal data or account credentials on pages or links received via such emails.
Specifically, Vade's artificial intelligence engine detected no fewer than 20,000 Microsoft phishing URLs during that period, an average of more than 200 fake URLs per day. In addition, the strong growth of Office 365 is the hook used for many of these scams.
For its part, despite sitting third in the ranking, Facebook saw the biggest growth: the number of phishing URLs referencing the social network was up 176 percent compared to last year. This means that in recent months, emails appearing to come from Facebook have been among the most used in phishing campaigns with various goals.
Second comes PayPal, since it is without a doubt one of the services through which attackers can turn the biggest profit. Netflix is also among the most impersonated companies, and, as we can see, other big names such as Apple, Amazon, DHL, and Bank of America round out the top 10 most counterfeited brands for this type of attack.
If you still use Facebook after the Cambridge Analytica scandal, and after more privacy and ethics violations than you and your extended family can count on your fingers and toes, you probably have no ethical qualms about the brain-computer interface the company began developing two years ago. Now, the first fruits of that work have arrived.
A Facebook-sponsored experiment at the University of California, San Francisco has managed to create an interface that translates brain signals into dialogue, and the team published its results in Nature Communications. The software reads these signals to determine what you heard and what you said in response, without any access to the conversation's audio. The process uses high-density electrocorticography (ECoG), which requires sensors implanted in the brain, so there is no immediate concern about Facebook (literally) reading minds without consent. Moreover, it is clear from the published research that the technology still has a long way to go before it reaches natural, practical utility:
Here we demonstrate real-time decoding of perceived and produced speech from high-density ECoG activity in humans during a task that mimics natural question-and-answer dialogue. While this task still provides participants with explicit external cueing and timing, the interactive and goal-oriented aspects of a question-and-answer paradigm are a major step towards more naturalistic applications. During ECoG recording, participants first listened to a set of pre-recorded questions and then verbally produced a set of answers. These data were used to train speech detection and decoding models. After training, participants performed a task in which they listened to a question and responded aloud with an answer of their choice. Using only neural signals, we detect when participants are listening or speaking and predict the identity of each detected utterance using phone-level Viterbi decoding. Since certain answers are only valid responses to certain questions, we integrate the question and answer predictions by dynamically updating the prior probabilities of each answer using the preceding predicted question likelihoods.
Essentially, participants gave live answers to pre-recorded questions, and researchers used their brain-signal data to train models that understand both what they said and what they heard. On average, the software correctly identified the question the participant heard 76 percent of the time, and the participant's answer 61 percent of the time. While it is easy to dream up harmful uses of this technology on Facebook's behalf, the technology itself is very promising for communicating with people who would otherwise be unable to do so because of injury or neurodegenerative disorders.
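The context-integration step the researchers describe, using the predicted question likelihoods to re-weight the prior probability of each candidate answer, can be sketched as a simple probabilistic update. The numbers and phrases below are invented for illustration; the actual system derives its likelihoods from phone-level Viterbi decoding of ECoG activity:

```python
# Hypothetical sketch of the context-integration step: marginalize over
# the predicted questions to obtain a prior for each candidate answer.
# All values are made up for illustration.

# P(question | neural signal), as output by the question decoder.
question_likelihoods = {"How is your room?": 0.7, "How are you?": 0.3}

# P(answer | question): which answers are plausible for which question.
answer_given_question = {
    "How is your room?": {"Bright": 0.5, "Dark": 0.5, "Fine": 0.0},
    "How are you?":      {"Bright": 0.0, "Dark": 0.0, "Fine": 1.0},
}

def answer_priors(q_likelihoods, a_given_q):
    """Compute P(answer) = sum over questions of P(q) * P(answer | q)."""
    priors = {}
    for question, q_prob in q_likelihoods.items():
        for answer, a_prob in a_given_q[question].items():
            priors[answer] = priors.get(answer, 0.0) + q_prob * a_prob
    return priors

priors = answer_priors(question_likelihoods, answer_given_question)
# "Bright" and "Dark" each get 0.7 * 0.5 = 0.35; "Fine" gets 0.3 * 1.0 = 0.30.
print(priors)
```

This is why the constraint matters: an answer that is implausible for the likely question ("Fine" in response to "How is your room?") is automatically down-weighted before the answer decoder ever runs.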
Although this research should continue in order to drive new medical breakthroughs and help people, it should still raise concern when it is funded by a company that wants to predict your future actions and, in some cases, already can. Will the company literally read minds in the near future? No, it has to revolutionize the global economy first, and a tightly controlled 61 percent accuracy achieved through invasive brain sensor implants will take some time to become more accurate and consumer-friendly. Still, we have seen how privacy problems can get significantly worse when concerns are not raised in advance.
Do you want your brain signals used for advertising? Facebook has declined to deny it would use the technology for that purpose. Ads are highly manipulative whether consumers want them or not, whether they promote a new mild body wash or a questionable political agenda. Advertising revenues reached nearly $105 billion in 2017. Imagine what companies would pay for your actual thoughts.
Of course, Facebook insists that its brain API will only read the thoughts you want to share. Facebook spokesperson Ha Thai put it this way:
We are developing an interface that lets you communicate with the speed and flexibility of voice and the privacy of text. Specifically, only the communications you have already decided to share by sending them to the speech center of your brain. Privacy will be built into this system, as it is into all Facebook efforts.
Reasons for skepticism aside, think about the number of times you have put your foot in your mouth, or simply didn't mean something the way you said it. Now imagine having everything you have ever said kept on file. Do you want Facebook to have that data? Do you want anyone to have it? If not, now is a good time to start scrutinizing Facebook, because we already know what happens when we wait to see what it does with our data.
The company has faced plenty of criticism for its handling of misinformation on its platform. Facebook's plan to address it? Working with news outlets on headlines and article previews.