AR glasses are changing the world

What happened to Google’s smart AR glasses?

Google’s smart AR glasses

Google’s goggles were not a hit with the general public, so the company found a niche market for them instead. Glass Enterprise Edition was designed to be an easy-to-use, comfortable-to-wear platform for tailored enterprise solutions, whether you develop your own software or receive it from a solution provider.

The ergonomic wearable offered several components and functions: a speaker, a touchpad, a multifunction button that triggers an event in your application (such as taking a picture or recording a video), and a small cubic display just above the right eye that shows context-specific information, such as your next tasks or instructions. It even supported remote access and screen mirroring: you could connect the glasses to your computer so that the Glass screen appeared on your computer screen, letting you demonstrate the application, operate it remotely, or access other features.

However, as of March 15, 2023, Google no longer sells Glass Enterprise Edition, though it will continue supporting existing devices until September 15, 2023. The field is therefore left almost uncontested for the next players.

What about Microsoft’s?

Microsoft’s HoloLens 2

Like Google, Microsoft has targeted enterprise-specific applications, which is why its AR glasses remain unknown in many communities. Microsoft’s AR glasses are tailored for precise, efficient, hands-free work. The latest model, HoloLens 2, is an ergonomic, untethered, self-contained holographic device with enterprise-ready applications designed to increase user accuracy and output. The difference between Google’s Glass Enterprise Edition 2 and Microsoft’s HoloLens 2 is that the latter is still alive.

Apple’s key product

Apple’s augmented reality glasses

Comparing the iPhone 11 Pro Max with its successors, you can easily see that Apple’s iPhone series lacks innovation like never before. It seems the iPhone line is no longer the corporation’s strategic product. Instead, Apple is focusing on the future to redefine digital life, much as its founder Steve Jobs did.

Unlike the other two Big Tech companies, Apple has always targeted a wider audience: people who crave tech innovation, well-off minimalists, and those who simply love the bitten apple! One of the most anticipated revolutionary products we are all waiting for is Apple’s augmented reality glasses. There are no official tech specs for the glasses yet, but we can expect many innovative applications from them, and from the competitors listed below. We are also waiting for the Apple-ish pricing!

Anticipated applications of AR Glasses

Interactive learning

Interactive learning via AR glasses

Interactive learning is an educational approach that incorporates technology, social networking and urban computing into course design and delivery. Interactive learning has evolved out of the hyper-growth in the use of digital technology and virtual communication, particularly by students.

By using AR glasses, students will be able to interact with digital models of complex subjects like anatomy and geography, making learning more engaging and accessible.

Virtual shopping

Virtual shopping and AR glasses

Virtual digital shopping provides a much higher level of experience, engagement, and immersion for the customer. At the simplest level, virtual stores are 3D, 360-degree, full-page visual experiences that live on a brand’s e-commerce site.

Imagine being able to shop online with augmented reality glasses. All you need to do is put on the glasses, look in the mirror, and virtually try on clothes. You’ll be able to see what they look like on you before you buy them.

Virtual navigation

Virtual navigation in AR glasses

Put simply, you can wear the glasses and have Google Maps, with precise navigation, right in your AR sunglasses.

In augmented reality (AR), the hyper-world can exist alongside the physical world or even be connected to it. For example, consider an additional digital layer placed on top of, or associated with, actual geographic coordinates. This contrasts with virtual reality metaverses, which exist only in a virtual realm.
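
To sketch how such a geo-anchored layer might work: the renderer needs the distance and compass bearing from the wearer's GPS position to each point of interest, then places the overlay relative to the device's heading. A minimal Python version (the coordinates and the rendering step are illustrative assumptions, not any vendor's API):

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (metres) and initial bearing (degrees) from point 1 to point 2."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    # Haversine formula for distance
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    # Initial bearing, 0 = north, clockwise
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist, bearing

# Hypothetical example: wearer near the Eiffel Tower, point of interest near the Louvre.
dist, bearing = distance_and_bearing(48.8584, 2.2945, 48.8606, 2.3376)
# A renderer would draw the POI marker at `bearing` degrees relative to the
# wearer's compass heading, labelled with `dist` metres.
```

Real AR platforms add altitude, heading fusion, and visual positioning on top of this, but the core anchor math is just geometry over GPS coordinates.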

Virtual advertisements

Virtual advertisements in AR glasses

Virtual advertising is the use of digital technology to insert advertising content into a live or pre-recorded television show, onto billboards in the streets or in the metaverses, or simply in front of your eyes, wherever you are!

Just imagine walking around with advertisements displayed right in front of your eyes. Doesn’t it sound amazing? Or maybe annoying?

Both Delta’s parallel reality displays and AR glasses can show personalized information to a specific audience; however, advertisers can do much more with AR glasses.

Imagine you are on a street and, through your glasses, you see a special offer on a billboard that others cannot see! Or, as you enter a shopping mall, you are first welcomed by name on your glasses’ screen and then guided toward the stores you may find interesting. All of this happens by displaying targeted, personalized ads in front of your eyes, through your glasses.

Interior Design

Interior Design by AR Glasses

You’ll be able to see whether a couch will actually fit in your living room, or get a visualization of what your rearrangement idea will look like.

Gaming

Gaming with AR glasses

Remote control / navigation

It’s better to see it in action:

Remote control of a drone via VR / AR Glasses!

Remote control of a drone and navigation via VR / AR Glasses!

Final thoughts

Whether they come from Google, Apple, Microsoft, or some unknown but emerging company, the glasses are essentially gadgets that facilitate our lives, businesses, and jobs. OEM products are usually bundled with native software or applications, but they are not limited to them. Creative developers around the world can access the manufacturer’s API documentation to build innovative apps for the glasses, redefine their functionality, and take them to the next level.

Finally, it’s the consumers who benefit from the cooperation between tech giants, businesses, and developers. I can’t wait to see the world more efficiently. What about you? Are you prepared for AR glasses, both technologically and financially? Will you be just an end user of such smart products, or a game-changer too? What other features would you like to see or use in VR or AR glasses? Share your thoughts with me and others in the comments below.

France mandates influencers to label filtered images - Emily Clarkson

Influencers in France could soon be banned from promoting cosmetic surgery on social media, with the government set to make it mandatory for them to label filtered images.

Under the potential new law, a photo or video that’s filtered or retouched must be declared so, while “all promotion for cosmetic surgery … as part of a paid partnership will be prohibited” (gambling or cryptocurrency paid partnerships will also be banned).

The government is seeking to “limit the destructive psychological effects” the practices have on social media users.

Breaches of the strict regulations, proposed by French Finance Minister Bruno Le Maire, could result in up to two years in jail and $32,515 (€30,000) in fines. Even worse (for them), offending influencers who are found guilty will not be allowed to use social media or continue their careers on the platforms.

France will make it mandatory for influencers to disclose if they used filters on their photos.
TikTok/@haileeandkendra

Mr. Le Maire said there would be a “zero-tolerance approach” to anyone who does not respect the rules, which will be debated by France’s National Assembly from today.

In a press release, he said the country is the first European nation to create a comprehensive framework for regulating the influencer sector – with the law holding to account all French influencers, as well as those who live abroad but earn money from sponsoring products sold in France.

Mr. Le Maire on Monday told Franceinfo that the regulations were not a “fight” against influencers or a way to stigmatize them, but were a system to protect both them and consumers.

“Influencers must be subject to the same rules as those that apply to traditional media,” he said, saying the internet “is not the Wild West”.

It’s not the first time France has sought to increase transparency regarding the circulation of manipulated images. The nation passed a law in 2017 requiring any commercial photos that had been retouched to make a model’s body appear thinner or thicker to be labeled “photographie retouchée” (retouched photograph).

The idea came courtesy of France’s former health minister Marisol Touraine, who said at the time it was important to avoid the promotion of “inaccessible beauty ideals and to prevent anorexia among young people”.

Face processing by AI

If I told you that your photos, videos, selfies, and faces will help artificial intelligence make other faces, would you be proud of yourself, or would you be extremely anxious?

Have you ever used face processing apps, filters or plugins? I mean apps like Snapchat, TikTok, or even Instagram’s story and video calling filters or similar ones.

With the help of these applications or in-app tools, you can process your face in different ways. That is, you can apply changes to the image of your face, or add elements such as rabbit ears, a dog nose, lipstick, or even a mouth full of dragon fire or a rainbow. You can thus create happy, fun moments for yourself and, of course, your audience.

The appeal of face filters, especially among Generation Z teenagers (people who were born with social networks such as Facebook and Instagram already in action), is so great that companies large and small have created countless apps and websites for them. In these apps, you must first record a selfie photo or video of yourself, or be on a video call, and then wait for the different processed versions of your image. Some of these apps let you make yourself look younger or older with the help of artificial intelligence! You can even change your skin color or hairstyle!

Recognition of facial beauty by artificial intelligence!

Artificial intelligence can work out whether people are satisfied with the filter applied to a user’s face by evaluating the audience’s reactions and feedback: emojis, animojis, likes, and especially the descriptions that people write in the comments!
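
As a toy illustration of that idea, a service might combine a post's like rate with crude scoring of emojis and comment text. Everything below (the weights, the emoji sets, the word lists, and the function itself) is a hypothetical sketch, not any real platform's algorithm:

```python
# Hypothetical approval scoring for a filtered photo. Weights and lexicons
# are invented for illustration only.
POSITIVE_TOKENS = {"😍", "🔥", "❤️", "👍", "beautiful", "gorgeous", "stunning", "love"}
NEGATIVE_TOKENS = {"👎", "😒", "weird", "fake", "creepy"}

def approval_score(likes, views, comments):
    """Crude approval score in [0, 1] from the like rate plus comment tokens."""
    score = likes / max(views, 1)  # base signal: like rate
    for text in comments:
        for token in text.lower().split():
            if token in POSITIVE_TOKENS:
                score += 0.05
            elif token in NEGATIVE_TOKENS:
                score -= 0.05
    return max(0.0, min(1.0, score))

print(approval_score(120, 1000, ["so beautiful 😍", "looks fake"]))
```

A production system would use trained sentiment models rather than keyword lists, but the feedback signals it consumes are the same ones listed above: reactions, likes, and comment text.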

Of course, from my point of view, we are all beautiful, and the elements of the human face were arranged in the best possible way by its creator. But in the popular view of beauty, people may consider some faces more beautiful than others. In any case, artificial intelligence learns which faces, with which features, look more beautiful in the eyes of the public. The information that artificial intelligence collects over time is very valuable and pricey. Do you think cosmetic companies and famous fashion brands would not pay good money for such information and statistics? With it, these companies can discover trends in people’s interests and design targeted products and services to match customers’ tastes.

Face processing by AI

Regulations on face processing

At a time when concerns are rife over the advanced technology of filters, which are becoming increasingly undetectable, introducing legislation on their use seems helpful. Research by Dove recently found that 50 percent of girls believe they don’t look good enough without some form of photo editing.

But, experts have warned in the past that simply labelling something as retouched or filtered doesn’t necessarily stop the viewer from wanting to achieve the look.

In fact, a study by the University of Warwick found that flagging models as “enhanced” or “manipulated” actually increases our desire to emulate their appearance.

“Drawing attention to digitally altered images may not, as one might expect and hope, reduce the aspiration to attain contemporary beauty ideals,” the paper stated.

“Beauty ideals cannot be easily challenged by such interventions. Beauty ideals are culturally constructed and are carriers of meaning and value.”

Permissioned or permissionless?

On the other hand, if your mobile phone is disconnected from the internet, some of these apps will not work. In other words, a number of face processing apps first need to send your photo to their server before they can apply the various changes and filters. The AI then edits the photo and displays the result to you.
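
Conceptually, such an app packages your encoded photo into a request and ships it to a remote server. A minimal Python sketch of that step (the endpoint URL and JSON field names are hypothetical; no real service's API is shown):

```python
import base64
import json

def build_filter_request(image_bytes: bytes, filter_name: str) -> str:
    """Encode a photo and wrap it in a JSON payload destined for a remote server."""
    payload = {
        "filter": filter_name,
        # The actual pixels of your face, base64-encoded for transport.
        "image": base64.b64encode(image_bytes).decode("ascii"),
    }
    return json.dumps(payload)

body = build_filter_request(b"\x89PNG...fake image bytes...", "older_face")
# The app would then POST `body` to something like
# https://api.example-filter.app/v1/edit (hypothetical) -- at which point the
# server, and anyone it shares data with, has a copy of your photo.
```

The privacy point is in the last comment: once the request leaves the device, you no longer control where the image goes.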

Future of face processing

In the future, these apps and services will offer 3D scanning of your face so it can be mapped onto your avatar in the metaverse. 3D facial scanning is not a new technology. Apple’s phones have been equipped with depth-sensing hardware for years (the TrueDepth camera, plus LiDAR scanners in recent Pro models), which, combined with artificial intelligence, lets users unlock their phone’s screen (Face ID).

Important questions about face processing apps

  1. Have you ever paid these apps for such attractive services?
  2. Why are these apps free?
  3. Where is the source of income or profitability of these apps?
  4. How much does it cost to design and develop such services or applications?

Many different answers will pass through our minds. But whatever the answers are, we must be cautious, because these apps may share the recorded images of your face with other websites, services, or artificial intelligence companies without your permission. One use of such photos is to feed machine learning and artificial intelligence systems.

Machine learning and deep learning help artificial intelligence learn how to create the image of a human face from the data and information that exists in the world.

Face processing with artificial intelligence

According to a New York Times article, all the faces above were produced by artificial intelligence! But where does the information that feeds these AI services come from? Right! You answered correctly: from our own faces!

AI companies do not usually collect people’s facial images through a public call. Instead, they may acquire such a source of information by purchasing the photos and videos collected by face processing apps. They may even develop and publish such apps themselves, under names apparently unrelated to their brands!

Moreover, the security and intelligence services of different countries may want to get such a valuable and important database of real images of people so that they can perform better in discovering, preventing or following up possible crimes.

The cons of face processing by artificial intelligence

Regardless of the advantages of the technology, which I mentioned in the previous parts, the production of hyperrealistic faces by artificial intelligence can also have risky aspects for individuals and businesses:

  • Violation of privacy through information leakage and the publication of private pictures.
  • Unlocking the screen of some phones or laptops by unauthorized people.
  • Authentication on crypto exchanges and the creation of unauthorized user accounts.
  • Fake video calls made to borrow money or collect personal or confidential information from your acquaintances.
  • Impersonation on social networks.
  • Erosion of trust between people in online communities.
  • Potential risk to your digital identity.
  • Potential risk of logging into a user account without a password.

The bottom line

Using technology can always be accompanied by fun, learning, and productivity, and at the same time, by risks. By expressing my personal views and collecting important content on the Cryptomentor website, I try to help you welcome new and emerging technologies with wide-open eyes by improving your knowledge and awareness.

If you have a thought about my views in this article, or any personal opinion, share it with me and others in the comments below so that we can brainstorm together.

Polygon zkEVM

zkEVM

Polygon zkEVM is an EVM-compatible ZK L2 (zero-knowledge layer 2). With zkEVM, Ethereum projects will be able to easily port existing smart contracts to the network without any modifications to their code.

This ease of implementation helps set the grounds for wider adoption. Vitalik Buterin himself has noted that ZK L2 solutions will drive the future of Ethereum scaling.

With zkEVM, there will no longer be any barriers with regard to scalability, security, decentralization, or developer experience. This is why many have crowned it the endgame, or Holy Grail, of crypto.
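
For a loose intuition of what "zero-knowledge" means, here is a toy Schnorr-style proof of knowledge: the prover convinces the verifier that they know the secret exponent x behind a public value y, without ever revealing x. This is a classroom sketch, not a zkSNARK, and not how Polygon zkEVM works internally:

```python
import secrets

p = 2**127 - 1   # a Mersenne prime (toy-sized; real systems use elliptic curves)
g = 3            # generator base

x = secrets.randbelow(p - 1)   # prover's secret
y = pow(g, x, p)               # public value the prover publishes

# One round of the interactive protocol:
r = secrets.randbelow(p - 1)   # prover picks a random nonce
t = pow(g, r, p)               # commitment sent to the verifier
c = secrets.randbelow(p - 1)   # verifier's random challenge
s = (r + c * x) % (p - 1)      # prover's response; alone it reveals nothing about x

# The verifier checks the proof using only the public values t, c, s, y:
# g^s == g^(r + c*x) == t * y^c  (mod p)
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

Production ZK rollups replace this one equation with a succinct proof that an entire batch of EVM transactions was executed correctly, but the shape is the same: a small check on public values stands in for redoing the hidden work.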

What is so unique about zkEVM?

Many people in crypto believed that a zkEVM was years away, and might never be practical or competitive with other ZK L2s. This was framed as an unavoidable tradeoff: we could have either full EVM equivalence or high performance, but not both. However, with the proving-system breakthroughs pioneered by Polygon Labs, we believe we can achieve full EVM equivalence while offering better performance (higher throughput, lower latency, and lower cost) than alt-L1s (like Solana, Avalanche, and Aptos), optimistic rollups, and other ZK rollups.

Low cost

  • Polygon zkEVM harnesses the power of ZK proofs to reduce transaction costs
  • A small zkSNARK footprint on L1 optimizes costs for users
  • Lowers the total cost of usage for end users, for a better user experience

High performance

  • Fast network finality with frequent validity proofs
  • Use of Polygon Zero technology, the fastest ZK proof in the world
  • Recursive STARKs for extreme scalability
  • Developers can create different types of dApps for a variety of user experiences

EVM equivalence

  • Deployment onto EVM without changes in code
  • The vast majority of existing smart contracts, developer tools and wallets work seamlessly.
  • Allows developers to focus on improving code rather than re-writing it

Security

  • Ethereum security inherited in L2 with the additional benefit of L2 batching for scaling
  • ZK proofs ensure transaction validity and safeguard user funds
  • Assurance that stored information cannot be changed or corrupted

Machine Thinking

Machine Thinking is the set of methodologies and culture used by humans to teach machines how to advance towards a design goal.

Traditionally in modern product development you’ll likely find a core team of a Product Manager, Product Designer, and Application or Product Engineer. In an A.I. First model we need to add a Machine Learning Engineer and a Machine Learning Researcher.

Unlike the traditional product development process, the team’s job will center less on what humans (or users) need and more on creating algorithms and pathways for a machine to learn and to produce output based on that learning.

In Machine Thinking, we are designing a set of interaction models for a machine to learn, output, and interact with a human (or other machines) with potentially infinite variations and outcomes.

As designers we naturally labor over the finest details of our work. But in an A.I. First model, we will not know many of the details. Machine Thinking places the emphasis less on the perfection of the design output, and more on the robustness of the design system.

I’ve encouraged teams to spend more of their cycles creating strong system maps, agnostic of interfaces, that outline the interaction model. In Machine Thinking this becomes even more valuable.

A simple Machine Thinking exercise: what are the fewest number of components (interaction or interface) that are capable of solving the greatest number of known transactions?
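
One way to make that exercise concrete is to treat it as a set-cover problem: each candidate component handles some subset of the known transactions, and we greedily pick components until every transaction is covered. The component and transaction names below are made up for illustration:

```python
# Greedy set-cover sketch of the Machine Thinking exercise above.
# Component and transaction names are hypothetical.
transactions = {"search", "pay", "book", "cancel", "review", "track"}
components = {
    "conversational_input": {"search", "book", "cancel"},
    "card_list": {"search", "review", "track"},
    "checkout_sheet": {"pay", "book"},
    "status_banner": {"track", "cancel"},
}

def fewest_components(components, needed):
    """Repeatedly pick the component covering the most still-uncovered transactions."""
    uncovered, chosen = set(needed), []
    while uncovered:
        best = max(components, key=lambda c: len(components[c] & uncovered))
        if not components[best] & uncovered:
            raise ValueError("some transactions cannot be covered by any component")
        chosen.append(best)
        uncovered -= components[best]
    return chosen

print(fewest_components(components, transactions))
```

Greedy set cover is not guaranteed optimal, but it is a fast, defensible first pass at "fewest components for the most transactions", and it forces the team to write the component-to-transaction map down explicitly.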

Dunning–Kruger effect

The Dunning–Kruger effect is a cognitive bias whereby people with low ability, expertise, or experience regarding a certain type of task or area of knowledge tend to overestimate their ability or knowledge. Some researchers also include the opposite effect for high performers: their tendency to underestimate their skills. In popular culture, the Dunning–Kruger effect is often misunderstood as a claim about general overconfidence of people with low intelligence instead of specific overconfidence of people unskilled at a particular task.

The Dunning–Kruger effect is usually measured by comparing self-assessment with a measure of objective performance. For example, the participants in a study may be asked to complete a quiz and then estimate how well they performed. This subjective assessment is then compared with how well they actually performed. This can happen in either relative or absolute terms, i.e., in comparison with one’s peer group as the percentage of peers outperformed or in comparison with objective standards as the number of questions answered correctly. The Dunning–Kruger effect appears in both cases, but is more pronounced in relative terms; the bottom quartile of performers tend to see themselves as being part of the top two quartiles. The initial study was published by David Dunning and Justin Kruger in 1999. It focused on logical reasoning, grammar, and social skills. Since then various other studies have been conducted across a wide range of tasks, including skills from fields such as business, politics, medicine, driving, aviation, spatial memory, examinations in school, and literacy.

Dunning–Kruger Effect
Relation between average self-perceived performance and average actual performance on a college exam. The red area shows the tendency of low performers to overestimate their abilities. Nevertheless, low performers’ self-assessment is lower than that of high performers.

Many models have been suggested to explain the Dunning-Kruger effect’s underlying causes. The original model by Dunning and Kruger holds that a lack of metacognitive abilities is responsible. This interpretation is based on the idea that poor performers have not yet acquired the ability to distinguish between good and bad performances. They tend to overrate themselves because they do not see the qualitative difference between their performances and the performances of others. This has also been termed the “dual-burden account” since the lack of skill is paired with the ignorance of this deficiency. Some researchers include the metacognitive component as part of the definition of the Dunning–Kruger effect and not just as an explanation distinct from it. Various researchers have criticized the metacognitive model and proposed alternative explanations. According to the statistical model, a statistical effect known as regression toward the mean together with the cognitive bias known as the better-than-average effect are responsible for the empirical findings. The rational model holds that overly positive prior beliefs about one’s skills are the source of false self-assessment. Another explanation claims that self-assessment is more difficult and error-prone for low performers because many of them have very similar skill levels. Another model sees lack of incentive to give accurate self-assessments as the source of error.
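
The statistical (regression toward the mean) account is easy to demonstrate with a small simulation: give everyone a true skill, add independent noise to both the measured score and the self-assessment, and the bottom quartile of measured performers will "overestimate" themselves with no psychology involved. The distributions below are illustrative assumptions:

```python
import random

random.seed(0)
N = 20000
skill = [random.gauss(0, 1) for _ in range(N)]
# Measured score and self-assessment each track true skill plus independent noise.
score = [s + random.gauss(0, 1) for s in skill]
self_est = [s + random.gauss(0, 1) for s in skill]

def percentile_ranks(xs):
    """Map each value to its percentile rank (0..100) within the sample."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for pos, i in enumerate(order):
        ranks[i] = 100.0 * pos / (len(xs) - 1)
    return ranks

score_pct = percentile_ranks(score)
self_pct = percentile_ranks(self_est)

bottom = [i for i in range(N) if score_pct[i] < 25]
mean_perf = sum(score_pct[i] for i in bottom) / len(bottom)
mean_self = sum(self_pct[i] for i in bottom) / len(bottom)
# The bottom quartile's mean measured percentile is ~12.5 by construction,
# while its mean self-assessed percentile lands well above that.
print(round(mean_perf, 1), round(mean_self, 1))
```

Because the noise in the score and the noise in the self-estimate are independent, selecting on a bad score mechanically selects people whose self-estimates regress back toward their (less extreme) true skill, reproducing the characteristic Dunning–Kruger gap.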

The Dunning–Kruger effect has been described as relevant for various practical matters, but disagreements exist about the magnitude of its influence. Inaccurate self-assessment can lead people to make bad decisions, such as choosing a career for which they are unfit or engaging in dangerous behavior. It may also inhibit the affected from addressing their shortcomings to improve themselves. In some cases, the associated overconfidence may have positive side effects, like increasing motivation and energy.