
The AI Revolution in Networking, Cybersecurity, and Emerging Technologies
Omar Santos
About This eBook
ePUB is an open, industry-standard format for eBooks. However,
support of ePUB and its many features varies across reading devices
and applications. Use your device or app settings to customize the
presentation to your liking. Settings that you can customize often
include font, font size, single or double column, landscape or portrait
mode, and figures that you can click or tap to enlarge. For additional
information about the settings and features on your reading device
or app, visit the device manufacturer’s Web site.
Many titles include programming code or configuration examples. To
optimize the presentation of these elements, view the eBook in
single-column, landscape mode and adjust the font size to the
smallest setting. In addition to presenting code and configurations in
the reflowable text format, we have included images of the code
that mimic the presentation found in the print book; therefore,
where the reflowable format may compromise the presentation of
the code listing, you will see a “Click here to view code image” link.
Click the link to view the print-fidelity code image. To return to the
previous page viewed, click the Back button on your device or app.
The AI Revolution in
Networking, Cybersecurity,
and Emerging Technologies

Omar Santos, Samer Salam, Hazim Dahir


Cover: Javier Pardina/Shutterstock
Figure 5-4: GreenOak/Shutterstock
Figure 5-5: U.S. Bureau of Transportation Statistics
Figure 5-6: malinikart/Alamy Images
Many of the designations used by manufacturers and sellers to
distinguish their products are claimed as trademarks. Where those
designations appear in this book, and the publisher was aware of a
trademark claim, the designations have been printed with initial
capital letters or in all capitals.

The authors and publisher have taken care in the preparation of this
book, but make no expressed or implied warranty of any kind and
assume no responsibility for errors or omissions. No liability is
assumed for incidental or consequential damages in connection with
or arising out of the use of the information or programs contained
herein.
For information about buying this title in bulk quantities, or for
special sales opportunities (which may include electronic versions;
custom cover designs; and content particular to your business,
training goals, marketing focus, or branding interests), please
contact our corporate sales department at
[email protected] or (800) 382-3419.
For government sales inquiries, please contact
[email protected].

For questions about sales outside the U.S., please contact


[email protected].
Visit us on the Web: informit.com/aw

Library of Congress Control Number: 2024930069


Copyright © 2024 Pearson Education, Inc.

Hoboken, NJ

Please contact us with concerns about any potential bias at


https://www.pearson.com/report-bias.html.
All rights reserved. This publication is protected by copyright, and
permission must be obtained from the publisher prior to any
prohibited reproduction, storage in a retrieval system, or
transmission in any form or by any means, electronic, mechanical,
photocopying, recording, or likewise. For information regarding
permissions, request forms and the appropriate contacts within the
Pearson Education Global Rights & Permissions Department, please
visit www.pearsoned.com/permissions/.

ISBN-13: 978-0-13-829369-7
ISBN-10: 0-13-829369-4


Editor-in-Chief
Mark Taub

Director ITP Product Management
Brett Bartow

Executive Editor
James Manly

Managing Editor
Sandra Schroeder

Development Editor
Christopher A. Cleveland

Production Editor
Mary Roth

Copy Editor
Jill Hobbs

Technical Editor
Petar Radanliev

Editorial Assistant
Cindy Teeters

Cover Designer
Chuti Prasertsith

Composition
codeMantra

Indexer
Erika Millen

Proofreader
Jennifer Hinchliffe
I would like to dedicate this book to my lovely wife, Jeannette, and my two beautiful children, Hannah and Derek, who have inspired and supported me throughout the development of this book.

—Omar Santos

To Zeina, Kynda, Malek, Ziyad, Mom, Dad, and Samir.

—Samer Salam

To Angela, Hala, Leila, and Zayd, the “real” Intelligence behind everything good in my life.

—Hazim Dahir
Contents
Preface

1 Introducing the Age of AI: Emergence, Growth, and Impact on Technology
The End of Human Civilization
Significant Milestones in AI Development (This Book Is
Already Obsolete)
The AI Black Box Problem and Explainable AI
What’s the Difference Between Today’s Large Language
Models and Traditional Machine Learning?
Hugging Face Hub: A Game-Changer in Collaborative Machine Learning
AI’s Expansion Across Different Industries: Networking,
Cloud Computing, Security, Collaboration, and IoT
AI’s Impacts on the Job Market
AI’s Impacts on Security, Ethics, and Privacy
Prompt Injection Attacks
Insecure Output Handling
Training Data Poisoning
Model Denial of Service
Supply Chain Vulnerabilities
Sensitive Information Disclosure
Insecure Plugin Design
Excessive Agency
Overreliance
Model Theft
Model Inversion and Extraction
Backdoor Attacks
MITRE ATLAS Framework
AI and Ethics
AI and Privacy
Summary
References
2 Connected Intelligence: AI in Computer Networking
The Role of AI in Computer Networking
AI for Network Management
Automating Network Planning
Automating Network Configuration
Automating Network Assurance
AI for Network Optimization
Routing Optimization
Radio Resource Management
Energy Optimization
AI for Network Security
Access Control
Anti-malware Systems
Firewalls
Behavioral Analytics
Software and Application Security
AI for Network Traffic Analysis
AI in Network Digital Twins
Summary
References
3 Securing the Digital Frontier: AI’s Role in
Cybersecurity
AI in Incident Response: Analyzing Potential Indicators to
Determine the Type of Attack
Predictive Analytics
Sentiment Analysis and Potential Threat Intelligence
Text-Based Anomaly Detection
Enhancing Human Expertise in the Security
Operations Center Through AI
Integration with Other Models
AI in Vulnerability Management and Vulnerability
Prioritization
AI in Security Governance, Policies, Processes, and
Procedures
Using AI to Create Secure Network Designs
Role of AI in Secure Network Design
AI and Security Implications of IoT, OT, Embedded, and
Specialized Systems
AI and Physical Security
How AI Is Transforming Physical Security
Security Co-pilots
Enhanced Access Control
AI in Security Assessments, Red Teaming, and Penetration
Testing
AI in Identity and Account Management
Intelligent Authentication
Automated Account Provisioning and Deprovisioning
Dynamic Access Control
Using AI for Fraud Detection and Prevention
AI and Cryptography
AI-Driven Cryptanalysis
Dynamic Cryptographic Implementations
Integration with Quantum Cryptography
AI in Secure Application Development, Deployment, and
Automation
Dynamic Analysis
Intelligent Threat Modeling
Secure Configuration Management
Intelligent Patch Management While Creating Code
Summary
References
4 AI and Collaboration: Building Bridges, Not Walls
Collaboration Tools and the Future of Work
Innovations in Multimedia and Collaboration
What Is Hybrid Work and Why Do We Need It?
AI for Collaboration
Authentication, Verification, or Authorization
Through Voice or Speech Recognition
Reducing Language Barriers with Real-Time
Translation
Virtual Assistants
Task Management
Context and Intent Analysis
Workflow Automation
Prescriptive Analytics
Learning and Development
Physical Collaboration Spaces
Virtual Collaboration Spaces
Team Dynamics
Document Management
The Contact Center: A Bridge to Customers
Virtual Agents
Call Routing Optimization
24 × 7 × 365 Support
Multilanguage Support
Customer Sentiment
Quality Assurance and Agent Coaching
Large Case Volume Handling
Predictive Analytics
Upgrading and Upselling
AR/VR: A Closer Look
Interactive Learning
AI-Assisted Real-Time Rendering
Content Generation
Personalization of Interaction
Virtual Assistant/Selling
NLP and NLU
Sentiments and Emotions
Affective Computing
Summary
References
5 AI in the Internet of Things (AIoT)
Understanding the IoT Landscape
AI for Data Analytics and Decision-Making
Data Processing
Anomaly Detection
Predictive Maintenance
Advanced Data Analytics
AI for IoT Resource Optimization
AI for IoT in Supply Chains
AI for IoT Security
AI and Threat Detection in IoT
AI and Vulnerability Detection in IoT Environments
AI and Authentication in IoT
AI and Physical Safety and Security
AI for IoT in Sustainability
Water Management and Preservation
Energy Management
Sustainable Waste Management and Recycling
Wildlife Conservation
Circular Economy
Summary
References
6 Revolutionizing Cloud Computing with AI
Understanding the Cloud Computing Environment
Virtualization
Application Mobility
Cloud Services
Deployment Models
Cloud Orchestration
AI in Cloud Infrastructure Management
Workload and VM Placement
Demand Prediction and Load-Balancing
Anomaly Detection
AI for Cloud Security
Vulnerabilities and Attacks
How Can AI Help?
Challenges for AI
AI for Cloud Optimization
Cloud Service Optimization
Cloud Infrastructure Optimization
AI and Machine Learning as a Service
AI Infrastructure Services
AI Developer Services: AutoML and Low-Code/No-
Code AI
AI Software Services
Advantages of AIaaS
Challenges of AI and Machine Learning in the Cloud
What Lies Ahead
Summary
References
7 Impact of AI in Other Emerging Technologies
Executive Order on the Safe, Secure, and Trustworthy
Development and Use of Artificial Intelligence
AI in Quantum Computing
Quantum Algorithm Development
Algorithmic Tuning and Automated Circuit Synthesis
Hyperparameter Optimization, Real-Time
Adaptation, and Benchmarking for Performance
Analysis
How AI Can Revolutionize Quantum Hardware
Optimization
Control Operation and Resource Optimization
Data Analysis and Interpretation
Quantum Machine Learning: Leveraging AI
Research to Uncover Quantum Advantages in
ML Tasks
AI in Blockchain Technologies
Automating the Execution of Smart Contracts with
AI
Could We Optimize Blockchain Mining Through AI
Algorithms?
Additional Use Cases in Healthcare, Supply Chain
Management, Financial Services, and
Cybersecurity
AI in Autonomous Vehicles and Drones
AI in Edge Computing
Extending the Cloud: Edge and Fog
Taking AI to the Edge
Lightweight AI and Tiny ML
Applications and Use Cases
Web 3.0
Summary
References
Index
Preface
The AI Revolution in Networking, Cybersecurity, and Emerging
Technologies offers an immersive journey into the world of artificial
intelligence and its profound impact on key domains of technology.
This manuscript demystifies AI’s emergence, growth, and current
impact, shedding light on its revolutionary applications in computer
networking, cybersecurity, collaboration technologies, IoT, cloud
computing, and other emerging technologies.
From explaining AI’s role in managing and optimizing networks to its
integral part in securing the digital frontier, the book offers a wealth
of insights. It explores how AI is building robust bridges in
collaboration tools and turning IoT into a super-intelligent network of
devices. The reader will also discover how AI is transforming the
cloud into a self-managing, secure, and ultra-efficient environment
and propelling other technologies towards unprecedented
advancements.
Our motivation is for this book to serve as a comprehensive guide
that bridges the gap between the complex world of artificial
intelligence and its practical implications in the field of IT. We aim to
make the profound impacts and potential of AI in various technology
sectors not only understandable but also tangible for a wide
spectrum of readers. Additionally, part of our vision is to create an
essential resource that empowers readers to understand, navigate,
and address the opportunities, complex challenges, and
responsibilities associated with AI technologies. This book will
empower readers, whether they are IT professionals, tech
enthusiasts, business leaders, or students, with the necessary
knowledge and insights into how AI is reshaping the IT landscape.
By providing a clear, in-depth exploration of AI’s role in computer
networking, cybersecurity, IoT, cloud computing, and more, we aim
to equip readers to harness the power of AI in their respective fields.
Ultimately, our motive is for this book to not only educate but also
inspire—serving as a catalyst that propels individuals and
organizations into the future of AI-integrated technology.
This book is highly relevant for a range of audiences, given its
exploration of various aspects of artificial intelligence and
technology.
IT Professionals: Those who work in fields related to
information technology, network management, cybersecurity,
cloud computing, IoT, and autonomous systems could benefit
from understanding how AI is revolutionizing their respective
fields.
Tech Enthusiasts: Individuals with an interest in emerging
technologies and future trends might find this book interesting
due to its examination of AI’s influence on various domains.
Business Leaders & Managers: This book would be useful
for executives, managers, and decision-makers who need to
understand the implications of AI on business processes and
strategies, particularly those related to IT.
Academics and Students: Professors, researchers, and
students in fields related to computer science, information
technology, and AI would find the book useful for research and
educational purposes.
Policy Makers: Given the increasing impact of AI on society
and the economy, policymakers could also gain valuable
insights from this book.
AI Professionals: People working in the field of AI might use
this book to understand the broader context and applications of
their work.
Register your copy of The AI Revolution in Networking,
Cybersecurity, and Emerging Technologies on the InformIT site for
convenient access to updates and/or corrections as they become
available. To start the registration process, go to
informit.com/register and log in or create an account. Enter the
product ISBN (9780138293697) and click Submit.
Acknowledgments
We would like to thank the technical editor, Petar Radanliev, for his
time and technical expertise.
Additionally, our appreciation goes to the dedicated Pearson team,
with special mentions to James Manly and Christopher Cleveland, for
their amazing support.
About the Authors
Omar Santos is a cybersecurity thought leader with a passion for
driving industry-wide initiatives to enhance the security of critical
infrastructures. Omar is the lead of the DEF CON Red Team Village,
the chair of the Common Security Advisory Framework (CSAF)
technical committee, the founder of OpenEoX, and board member of
the OASIS Open standards organization. Omar’s collaborative efforts
extend to numerous organizations, including the Forum of Incident
Response and Security Teams (FIRST) and the Industry Consortium
for Advancement of Security on the Internet (ICASI).
Omar is a renowned expert in ethical hacking, vulnerability research,
incident response, and AI security. He employs his deep
understanding of these disciplines to help organizations stay ahead
of emerging threats. His dedication to cybersecurity has made a
significant impact on businesses, academic institutions, law
enforcement agencies, and other entities striving to bolster their
security measures.
With over 20 books, video courses, white papers, and technical
articles under his belt, Omar’s expertise is widely recognized and
respected. Omar is a Distinguished Engineer at Cisco, focusing on AI
security research, incident response, and vulnerability disclosure.
Omar is a frequent speaker at many conferences, including RSA,
Blackhat, DEF CON, and more, where he shares his cybersecurity
and AI security insights with the global community. You can follow
Omar on Twitter @santosomar.
Samer Salam is a technology architect and engineering leader in
the computer networking industry with over two decades of
experience. In his role as Distinguished Engineer at Cisco Systems,
he focuses on identifying, incubating, and mainstreaming disruptive
technologies, in addition to defining and driving the system and
software architecture for networking products. His work spans the
areas of Intent Based Networking, Artificial Intelligence, Natural
Language Processing, Machine Reasoning, Semantic Technologies
and Immersive Visualization. Previously at Cisco, he held multiple
technical leadership and software development positions working on
IoT, Layer 2 VPN, Metro Ethernet, OAM protocols, network resiliency,
system scalability, software quality, multi-service edge, broadband,
MPLS, and dial solutions.
Samer was awarded the International Society of Service Innovation
Professionals (ISSIP) 2022 Excellence in Service Innovation Award
for the “Impact to Innovation” category. He holds over 99 US and
international patents, and is coauthor of The Internet of Things
From Hype to Reality: The Road to Digitization. He has authored
fourteen IETF RFCs and multiple articles in academic and industry
journals. He is also a speaker at Cisco Live, and blogs on networking
technology. Samer holds an M.S. degree in Computer Engineering
from the University of Southern California in Los Angeles and a
B.Eng. in Computer and Communications Engineering, with
Distinction, from the American University of Beirut.
Hazim Dahir is a Distinguished Engineer at the Cisco Technology
Enablement and Acceleration Office. He is working to define and
influence next-generation digital transformation architectures across
multiple technologies and industry verticals. Hazim started his Cisco
tenure in 1996 as a software engineer and subsequently moved into
the services organization, focusing on large-scale and emerging
technology network architectures. He is currently focusing on
developing architectures utilizing security, collaboration, edge
computing, and AIoT technologies addressing the future of work
and hybrid cloud requirements for large enterprises. Through his
passion for engineering and sustainability, Hazim is currently working
on advanced software solutions for electric and autonomous vehicles
with global automotive manufacturers. Hazim is a frequent presenter
at multiple US & global conferences and standards bodies. He is the
vice-chair for the IEEE Edge Computing workgroup. He has more
than 22 issued and pending US and International patents, several
R&D publications, and is the co-author of four technical books.
1
Introducing the Age of AI: Emergence, Growth, and Impact on Technology
Welcome to the age of artificial intelligence (AI) and the AI revolution!
It’s more than an era of technological advancements; it is a testament to
human curiosity, to our ceaseless quest for knowledge, and our undying
ambition to shape humanity. It is an era that will transform core
technologies such as computer networking, cybersecurity, collaboration,
cloud computing, the Internet of Things (IoT), quantum computing, and
many emerging technologies. This book will cover the transformative
journey that is redefining core IT technologies. In Chapter 2, “Connected
Intelligence: AI in Computer Networking,” we explore how AI will
transform computer networking. From managing complex network
infrastructures and reducing downtime to optimizing bandwidth usage
and supporting predictive maintenance, AI is revolutionizing how we
share, transmit, and receive information.
In Chapter 3, “Securing the Digital Frontier: AI’s Role in Cybersecurity,”
we shift our focus to one of the fiercest battlegrounds of technology:
cybersecurity. The need to protect our digital landscape has never been
more urgent. AI, with its predictive capabilities, automation, and
adaptability, is redefining how we protect our data, our systems, and
people.
We pivot from the realm of networks and security to the domain of
collaboration technologies in Chapter 4, “AI and Collaboration: Building
Bridges, Not Walls.” The journey continues in Chapter 5, “AI in the
Internet of Things (IoT) or AIoT: The Intelligence Behind Billions of
Devices,” where we delve into the junction of AI and IoT. AIoT is the
intelligence that’s bridging the physical and digital world, from our homes
to our cities and critical infrastructure, making them smarter, more
efficient, and more responsive.
In Chapter 6, “Revolutionizing Cloud Computing with AI,” we examine
how AI will continue to transform cloud computing into a more powerful,
scalable, and efficient technology. Meanwhile, cloud computing has
become the de facto platform for AI’s growth, providing the
computational power and vast storage it needs. Finally, in Chapter 7,
“Impact of AI in Other Emerging Technologies,” we expand our view to
encompass the broader technological horizon. We’ll see how AI is
breathing life into other cutting-edge technologies, from autonomous
vehicles and personalized medicine to quantum computing and beyond.
Together, these chapters weave the narrative of the ongoing AI
revolution. The journey won’t be easy; it’s complex, uncertain, even
daunting. But it is also exhilarating, rich with potential and opportunities.
Join me, along with my co-authors Hazim and Samer, as we embark on
this journey.

The End of Human Civilization


The great debate: Will AI ultimately transform or terminate human
civilization? AI is a groundbreaking technology that has sparked intense
debates about its implications for humanity’s future. Although some
individuals have concerns that AI might bring about the downfall of
human civilization, it would be shortsighted not to acknowledge the immense
benefits and opportunities it presents.
Several notable figures, including prominent scientists and technology
pioneers, have expressed concerns about the AI-enabled future. Their
concerns center on potential dangers, including the rise of
superintelligent machines that could surpass human capabilities and gain
control over critical systems. These dystopian thoughts envision scenarios
where AI-powered systems become uncontrollable, leading to
catastrophic consequences for humanity.
Don’t get me wrong—numerous risks lie ahead. But it is essential to
recognize that AI, in its current form, is a tool that requires human
guidance and oversight. Responsible development and regulation can
mitigate the potential risks and ensure that AI systems align with human
values and ethical principles. Researchers, policymakers, and industry
leaders are actively working on designing frameworks that prioritize
safety, transparency, and accountability. Their work responds to the fears
that fuel these concerns: autonomous weapons, job displacement,
erosion of privacy, and the loss of human touch in various domains.
However, a vast majority of experts and enthusiasts believe that AI offers
immense potential for positive transformation in almost every aspect of
human life. The extraordinary benefits of AI are already evident in
numerous fields, including IT, healthcare, education, and transportation.

Significant Milestones in AI
Development (This Book Is Already
Obsolete)
It is crucial to be humble and recognize a unique paradox inherent to this
field: The accelerating pace of AI development might render any effort to
encapsulate its current state obsolete almost as soon as it is
documented. In this regard, you could argue that this book—or, really,
any book about technology—captures a snapshot of a moment already
surpassed by the exponential rate of progress.
Each day, AI research generates new insights and unveils improved
algorithms, models, and implementations. These developments are not
only related to the breakthroughs that make news headlines, or what you
hear in podcasts and YouTube videos, but also thrive in the form of
myriad incremental advancements that might seem small on their own
but collectively represent a significant transformation. The landscape of
AI we explore today might differ from the one that exists tomorrow.
However, instead of viewing this inability to keep up as a shortcoming,
consider it a testament to the true potential and pace of the AI field.
Don’t think about this book as just a static record of AI’s current state,
but rather view it as a compass, pointing toward the broader impact. It’s
designed to provide a framework, a lens through which to make sense of
this ongoing revolution, and to help navigate the future developments
that, at this moment, we can only begin to imagine.
The AI field has already seen an incredible number of important
milestones, many of which have led to the advancements we’re seeing
today. Figure 1-1 provides a timeline of the most popular historical
milestones of AI.

Figure 1-1
Historical AI Development Milestones

These milestones, among many others, represent key moments in the


development of AI, each marking a significant stride forward in the
technology’s capabilities. Let’s explore the milestones illustrated in Figure
1-1.
In 1950, Alan Turing proposed a test to measure a machine’s ability to
exhibit intelligent behavior equivalent to or indistinguishable from that of
a human. This test, known as the Turing Test, remains an important
concept in the field of AI. The first organized gathering to discuss AI was
the Dartmouth Conference, held in 1956. This is where the term “artificial
intelligence” was coined. The Dartmouth Conference initiated active
research in the field. Three years later, John McCarthy and Marvin Minsky
established the Artificial Intelligence Laboratory at the Massachusetts
Institute of Technology (MIT), signifying the formalization of AI as an
academic field of study. Joseph Weizenbaum at MIT later created ELIZA,
one of the first AI programs that could simulate conversation with a
human being. In 1972, MYCIN was developed at Stanford University; it
was one of the first expert systems designed to help doctors diagnose
bacterial infections and recommend treatments. IBM’s supercomputer,
Deep Blue, defeated world chess champion Garry Kasparov in 1997. This
event showcased the potential for AI to outperform humans in complex
tasks. Watson, another IBM creation, won the game show Jeopardy in
2011, demonstrating a significant leap in AI’s natural language processing
and understanding capabilities.
Developed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton,
AlexNet won the ImageNet Large Scale Visual Recognition Challenge in
2012, highlighting the effectiveness of deep learning and convolutional
neural networks in image recognition tasks. Two years later, Microsoft
launched Xiaoice, a social chatbot that could carry on conversations,
paving the way for the development of advanced conversational AI
systems.
In 2016, Google’s AlphaGo, developed by DeepMind, defeated world Go
champion Lee Sedol, showcasing the power of AI in mastering a game far
more complex than chess. In 2015, Elon Musk and others co-founded OpenAI, a research
organization committed to ensuring artificial general intelligence (AGI)
can be aligned with human values and widely distributed. OpenAI’s GPT
models have marked significant progress in generative AI. DeepMind’s AI,
AlphaFold, made a breakthrough in biology by solving the protein folding
problem, demonstrating the potential for AI to accelerate scientific
discovery.
Nowadays, dozens of AI models and applications are being released at a
very rapid pace. But what’s next? We can extrapolate, based on trends
and projected advancements, that AGI is very near. Such AI systems will
possess the ability to understand, learn, and apply knowledge across a
wide array of tasks at a human level.
Meanwhile, quantum computing is in an emerging stage. Its integration
with AI will open up new possibilities for data processing and machine
learning. How quantum computers are programmed and networked will
differ vastly from traditional computers.
As AI systems grow more complex, the demand for transparency also
increases. In particular, we will need to see significant advancements in
“explainable AI” (XAI). We must design these systems to provide clear,
understandable explanations for their decisions and actions.
The AI Black Box Problem and
Explainable AI
The opaque and mysterious nature of complex machine learning and AI
models has been a recurring challenge in the industry, driving a growing
demand for explainable AI (XAI). Machine
learning models, particularly deep learning ones, are often referred to as
“black boxes.” While incredibly powerful, these models’ inner workings
are largely inexplicable. Even the engineers who create them can struggle
to explain precisely why a given model made a specific decision. This lack
of transparency poses serious issues for such models’ wider application.
When AI impacts critical areas such as medical diagnosis, networking,
cybersecurity, or autonomous vehicle control, it’s crucial for users to
understand the reasoning behind its decisions. The risks of mistakes
caused by biased or incorrect decisions could lead to horrible
consequences and damage trust in AI technologies.
Explainable AI seeks to bridge this gap, by promoting the development of
AI models that are not only performant but also interpretable. The goal is
to create systems that can provide understandable explanations for their
decisions in a way that humans can comprehend. These explanations can
take different forms, such as feature importance, surrogate models, and
visual explanations. For instance, they might highlight which features or
inputs were most influential in a model’s decision. They might also
involve training simpler, interpretable models to understand the decisions
made by a more complex one. Another option is visual explanations such
as heat maps that show which parts of an image were most important for
a model’s classification.
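To make the perturbation idea behind feature importance concrete, here is a minimal sketch in plain Python. The "black box" model, its weights, and the function names are all invented for illustration; the technique is simply to nudge one input at a time and observe how much the output moves:

```python
# Toy "black box": we can query it, but pretend we cannot see inside.
def black_box_model(features):
    # Feature 0 dominates the score; feature 2 barely matters.
    return 5.0 * features[0] + 0.5 * features[1] + 0.01 * features[2]

def perturbation_importance(model, sample, delta=0.1):
    """Estimate each feature's importance as the absolute change in the
    model's output when that feature alone is perturbed by `delta`."""
    baseline = model(sample)
    importances = []
    for i in range(len(sample)):
        perturbed = list(sample)
        perturbed[i] += delta
        importances.append(abs(model(perturbed) - baseline))
    return importances

sample = [1.0, 1.0, 1.0]
scores = perturbation_importance(black_box_model, sample)
print(scores)  # feature 0's score dominates the other two
```

Real XAI toolkits apply far more sophisticated versions of this idea (for example, permutation importance over whole datasets, or SHAP values), but the underlying question is the same: which inputs, when changed, change the decision?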
Achieving a balance between model interpretability and performance is
one of the main challenges in making AI more widely applicable.
Simplifying models for the sake of interpretability can sometimes reduce
their accuracy. Also, the concept of “explanation” can be subjective and
varies based on the person’s expertise and the context.
What’s the Difference Between
Today’s Large Language Models and
Traditional Machine Learning?
Today’s AI systems have achieved a more nuanced understanding of
human language, enabling more effective, natural interactions and a
deeper comprehension of context, sentiment, and intent. But what’s the
difference between today’s large language models and traditional
machine learning? Large language models (LLMs) such as OpenAI’s GPT,
Falcon, and Llama 2, along with generative models such as DALL-E,
Stable Diffusion, and Midjourney, are
transforming our understanding of what machines can achieve. Their
ability to generate human-like text has implications for numerous fields,
from content generation and translation to customer service and tutoring.
These models represent a shift away from traditional machine learning
approaches.
Traditional machine learning models, including algorithms like decision
trees, linear regression, and support vector machines, typically work by
learning patterns from a set of input–output examples. They are often
relatively simple, interpretable, and require explicit feature engineering.

Note
Feature engineering is the process in which the data scientist specifies
which aspects of the data the model should pay attention to.

Traditional models tend to be task-specific. This implies that a new model


must be trained from scratch for each unique problem, with little to no
transfer of knowledge occurring from one task to another.
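As an illustration of explicit feature engineering and task-specificity, consider a sketch of a traditional spam-style classifier. Everything here is invented for the example: the data scientist hand-picks the features, and the resulting model is useful for exactly this one task:

```python
# Hand-crafted feature extraction: the data scientist decides which
# aspects of the raw text the model gets to see.
def extract_features(message):
    return [
        len(message),                      # overall message length
        message.lower().count("free"),     # a word often seen in spam
        sum(ch == "!" for ch in message),  # exclamation marks
    ]

# A deliberately simple, task-specific "model": a weighted sum, with
# weights that a traditional algorithm would learn from labeled examples.
WEIGHTS = [0.001, 1.0, 0.5]
THRESHOLD = 1.0

def is_spam(message):
    features = extract_features(message)
    score = sum(w * f for w, f in zip(WEIGHTS, features))
    return score > THRESHOLD

print(is_spam("FREE prizes!!! Click now!"))
print(is_spam("See you at the meeting tomorrow."))
```

Retargeting this model at a different problem, say, sentiment analysis, would mean designing new features and retraining from scratch, which is precisely the limitation that pretrained, fine-tunable LLMs sidestep.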
LLMs introduce a different approach, known as Transformer-based
models. These models leverage deep learning and natural language
processing (NLP) to understand and generate human-like text. They are
pretrained on a massive corpus of text data, learning patterns, structures,
and even some facts about the world from billions of sentences. Unlike
traditional models, these LLMs are generalists. Once trained, they can be
fine-tuned for a wide range of tasks, such as translation, question-
answering, summarization, and more, all within the same model
architecture. The ability to transfer knowledge across tasks is one of their
key strengths.

Tip
In the context of AI, parameters are the internal variables that the
model learns through training. They are the part of the model that is
learned from historical training data and enable the model to make
predictions or decisions. In a simple linear regression model, the
parameters are the slope and the y-intercept. In a deep learning
model such as a neural network, the parameters are the weights and
biases in the network. These parameters are initially set to random
values and then iteratively adjusted based on the feedback signal
(loss or error) that the model gets as it trains on the data.
In the case of LLMs like GPT, the parameters are the weights in the
numerous layers of the Transformer architecture that the model uses.
As an example, the legacy GPT-3 has 175 billion parameters, which
means the model has an equal number of weights that it can adjust
to learn from the data it’s trained on. GPT-4 is speculated to have 1.76
trillion parameters, though some sources suggest it is a combination
of different models (the exact details have not been disclosed by
OpenAI).
Their total number of parameters allows these models to capture and
represent very complex patterns and relationships in the data. In turn,
that is part of what enables them to generate such remarkably
human-like text.

Unlike traditional machine learning, large language models do not rely on explicit feature engineering. They learn to represent and understand the data automatically through their training process, which involves adjusting millions, billions, or even trillions of parameters to minimize the difference between their predictions and the actual outcomes.
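To make the idea of parameters and the training feedback loop concrete, here is a minimal sketch, in plain Python with made-up data, of gradient descent fitting the two parameters of a simple linear regression model. It is a toy illustration of the loop described above, not the procedure used to train an LLM:

```python
# Toy example: the model's "parameters" are a slope w and an intercept b.
# Training iteratively adjusts them to reduce the prediction error (loss).
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0), (4.0, 9.0)]  # generated from y = 2x + 1

w, b = 0.0, 0.0          # parameters start at arbitrary values
learning_rate = 0.01

for _ in range(5000):
    # Gradients of the mean-squared-error loss with respect to w and b
    grad_w = sum(2 * ((w * x + b) - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * ((w * x + b) - y) for x, y in data) / len(data)
    # Nudge each parameter against its gradient (the "feedback signal")
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(round(w, 2), round(b, 2))  # approaches w = 2.0, b = 1.0
```

An LLM does conceptually the same thing, except that instead of two parameters there are billions of weights, and the loss measures how well the model predicts the next token.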
Table 1-1 compares the traditional machine learning models with the
newer, more advanced models such as LLMs, Transformer-based models,
and generative AI models.

Table 1-1 Traditional Machine Learning Models Versus Newer AI Models

Basic architecture. Traditional: generally based on a mathematical/statistical model; examples include linear regression, decision trees, support vector machines (SVMs), etc. Newer (LLMs, Transformer-based, generative AI): generally based on neural networks, with the Transformer being a specific type of network architecture.

Data requirement. Traditional: requires less data compared to the new AI models. Newer: requires massive amounts of data to perform optimally.

Comprehensibility. Traditional: easier to interpret and understand; some models (e.g., decision trees) provide clear, intuitive rules. Newer: more of a "black box" approach; these models are often more difficult to interpret.

Training time. Traditional: usually quicker to train because of their simplicity and lower computational complexity. Newer: require significant computational resources and time due to their complexity.

Model performance. Traditional: generally may have lower performance on complex tasks compared to the newer AI models. Newer: outperform traditional machine learning models on complex tasks such as natural language processing and image recognition.

Generalization. Traditional: usually better at generalizing from less data. Newer: may struggle with generalization due to their reliance on large amounts of training data.

Versatility. Traditional: specific models are usually required for specific tasks. Newer: more versatile; a single architecture (e.g., the Transformer) can be used for a variety of tasks.

Transfer learning. Traditional: limited capability. Newer: excel in transfer learning, where a model trained on one task can be fine-tuned to perform another task.

Feature engineering. Traditional: requires careful manual feature engineering. Newer: feature extraction is typically performed automatically by the model itself.

Transfer learning is a machine learning technique in which a model developed for a given task is reused as the starting point for a model on a second task. Basically, you take a pretrained model (a model trained on a large dataset) and adapt it for a different (but related) problem. Transfer learning is useful when you have a small dataset for the problem you are interested in solving but also have access to a much larger, related dataset.
For example, suppose you have a convolutional neural network (CNN)
model that has been trained to recognize 1000 types of objects. This
model has already learned useful features from the images it has seen,
such as edges, corners, and textures. Now, you have a new task where
the system needs to recognize only a few types of network topology
devices (e.g., routers, switches, firewalls, servers, desktops). Instead of
training a new model from scratch, you can use the pretrained model and
slightly modify its architecture to suit your specific task. In this way, you
can leverage the learned features without having to start the training
process from scratch.
Several different types of transfer learning are possible:

Feature extraction: The pretrained model acts as a feature extractor. You remove the output layer and add new layers that are specific to your task. The pretrained layers are usually "frozen" during training (i.e., their weights are not updated).

Fine-tuning: You not only replace the output layer but also continue to train the entire network, sometimes at a lower learning rate, to allow the pretrained model to adapt to the new task.

Task-specific models: Sometimes, certain layers of the pretrained model may be replaced or adapted to make the model better suited to the new task.
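The "feature extraction" variant can be sketched in a few lines of NumPy. This is an illustrative toy: the frozen weights below are random stand-ins for weights that, in practice, would come from a real pretrained model, and the classification dataset is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained network's early layers. In practice these
# weights come from training on a large dataset; here they are random
# placeholders that we treat as "frozen" (never updated).
W_frozen = rng.normal(size=(4, 8))

def extract_features(x):
    # The frozen layers act purely as a feature extractor.
    return np.tanh(x @ W_frozen)

# New task: a tiny synthetic binary classification dataset.
X = rng.normal(size=(64, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Only the new output layer (the "head") is trained.
w_head = np.zeros(8)
b_head = 0.0
lr = 0.5

for _ in range(500):
    feats = extract_features(X)
    p = 1 / (1 + np.exp(-(feats @ w_head + b_head)))   # sigmoid output
    grad = p - y                                       # logistic-loss gradient
    w_head -= lr * feats.T @ grad / len(X)             # update the head only;
    b_head -= lr * grad.mean()                         # W_frozen stays fixed

acc = ((p > 0.5) == (y == 1)).mean()
print(f"accuracy with frozen features: {acc:.2f}")
```

The key point is that gradient updates touch only the new head; the pretrained (here, placeholder) layers are reused as-is.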

Tip
Training a model from scratch can be computationally expensive and
time-consuming. Transfer learning can significantly speed up this
training process. When you have a small dataset, training a model
from scratch may result in overfitting. Transfer learning helps in such
cases by leveraging a pretrained model. A pretrained model has
generalized features that can enhance its performance on the new
task, even if the new task is significantly different from the original
task. Transfer learning has seen successful applications in multiple
domains, including NLP, computer vision, and reinforcement learning.

Feature engineering is the process of selecting, transforming, or creating new input variables (features) to improve the performance of a machine learning model. The quality and relevance of the features used can significantly impact the model’s ability to learn the underlying patterns in the data, and consequently its performance on unseen data.
The feature engineering process typically involves a combination of
domain expertise, data analysis, and experimentation, and can include
steps such as variable transformation, feature extraction, and feature
construction. Variable transformation is explained in Figure 1-2.

Figure 1-2 Mechanics of Variable Transformation
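The following sketch illustrates two common variable transformations, a log transform followed by z-score standardization, on a hypothetical skewed feature (the values are made up for illustration):

```python
import math

# Hypothetical skewed feature: e.g., bytes transferred per network flow.
values = [120, 340, 560, 1200, 45000, 380000]

# Log transformation compresses the long tail so extreme values no
# longer dominate distance- or gradient-based models.
logged = [math.log10(v) for v in values]

# Standardization (z-score) rescales to mean 0, standard deviation 1.
mean = sum(logged) / len(logged)
std = (sum((x - mean) ** 2 for x in logged) / len(logged)) ** 0.5
z_scores = [(x - mean) / std for x in logged]

print([round(x, 2) for x in logged])
print([round(z, 2) for z in z_scores])
```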

Feature extraction is the process of transforming high-dimensional data into a lower-dimensional form, while retaining the most important
information in the data. This technique is often used to simplify the
dataset while preserving its essential characteristics, making it easier for
machine learning algorithms to learn from it. Methods such as principal
component analysis (PCA), for numerical data, and term frequency-
inverse document frequency (TF-IDF), for text data, are commonly used
for feature extraction. The goal is to highlight the key features that will
contribute to better model performance while reducing computational
complexity and mitigating issues such as overfitting.
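As a rough sketch of how PCA performs this dimensionality reduction, the NumPy example below projects synthetic 10-dimensional data (generated so that most of its variance lies along two underlying directions) onto its two leading principal components:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic high-dimensional data: 200 samples, 10 features, but most
# of the variance lies along 2 underlying directions plus small noise.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.05 * rng.normal(size=(200, 10))

# PCA via the covariance matrix: project onto the top-k eigenvectors.
Xc = X - X.mean(axis=0)                      # center the data
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
top2 = eigvecs[:, -2:]                       # two leading components
X_reduced = Xc @ top2                        # 10 dimensions -> 2

explained = eigvals[-2:].sum() / eigvals.sum()
print(f"variance retained by 2 of 10 components: {explained:.1%}")
```

Because the data was built around two latent directions, two components capture nearly all of the variance, which is exactly the situation in which PCA-style feature extraction pays off.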
Embeddings in AI refer to the conversion of discrete variables (e.g.,
words or items) into continuous vectors of fixed dimensions in a lower-
dimensional space. The idea is to map similar items or words close to
each other in that vector space, thereby capturing the semantic or
functional relationships among them. Embeddings are widely used in NLP,
recommendation systems, and other machine learning tasks to represent
categorical variables or complex data structures in a way that is more
amenable to the demands of machine learning algorithms.
For example, in NLP, word embeddings like Word2Vec, GloVe, and BERT
represent words in dense vector spaces in such a way that the semantic
meaning of the words is captured. Words with similar meanings will have
vectors that are close to each other in the space. This enables better
performance on tasks such as text classification, sentiment analysis, and
machine translation.
Embeddings can also be used for other types of data, such as graph data,
where nodes can be embedded into continuous vectors, and for
collaborative filtering in recommendation systems, where both users and
items can be embedded in such a way that their inner products can
predict user–item interactions. The main advantage of using embeddings
is that they capture the complexity and structure of the data in a
compact form, enabling more efficient and effective learning by machine
learning models.
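The core idea, that similar items sit close together in the vector space, can be illustrated with cosine similarity over toy embeddings. The vectors below are hand-picked for illustration; they are not values taken from Word2Vec, GloVe, or any real model:

```python
import math

# Illustrative 4-dimensional embeddings (hypothetical toy values).
embeddings = {
    "router": [0.9, 0.8, 0.1, 0.0],
    "switch": [0.8, 0.9, 0.2, 0.1],
    "banana": [0.0, 0.1, 0.9, 0.8],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means orthogonal.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

sim_related = cosine(embeddings["router"], embeddings["switch"])
sim_unrelated = cosine(embeddings["router"], embeddings["banana"])
print(f"router~switch: {sim_related:.2f}")
print(f"router~banana: {sim_unrelated:.2f}")
```

Semantically related terms ("router" and "switch") score much higher than unrelated ones, which is the property that makes embeddings useful for classification, search, and recommendation.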

Tip
Retrieval augmented generation (RAG) is an NLP technique that
combines the strengths of extractive retrieval and sequence-to-
sequence generative models to produce more informative and
contextually relevant responses. In a typical RAG setup, an initial
retrieval model scans a large corpus of documents to find relevant
passages based on the query, and these retrieved passages are then
provided as an additional context to a sequence-to-sequence model
that generates the final response. This process enables the model to
access external knowledge effectively, enriching its generated
responses or answers with information that may not be present in the
initial training data. In a typical RAG implementation, vector
databases such as Chroma DB and Pinecone are used to store the
vectorized representation of the data.
The following article explains how you can use RAG in AI
implementations: https://community.cisco.com/t5/security-
blogs/generative-ai-retrieval-augmented-generation-rag-and-
langchain/ba-p/4933714.
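The retrieval step of RAG can be sketched with a deliberately simplified scorer. Real implementations embed the query and documents and search a vector database such as Chroma DB or Pinecone; the word-overlap scoring and the three-document corpus below are stand-ins for illustration only:

```python
# Minimal sketch of the retrieval step in RAG. Word-overlap scoring
# stands in for the vector similarity search described above.
corpus = {
    "doc1": "BGP is a path-vector routing protocol used between autonomous systems",
    "doc2": "OSPF is a link-state interior gateway protocol",
    "doc3": "TLS provides encryption and authentication for network traffic",
}

def retrieve(query, k=1):
    # Score each document by how many query words it shares.
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

query = "which routing protocol runs between autonomous systems"
context = retrieve(query)[0]

# The retrieved passage is prepended to the prompt that would be sent
# to the generative model (the generation call itself is omitted here).
prompt = f"Context: {context}\n\nQuestion: {query}\nAnswer:"
print(prompt)
```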

Table 1-2 compares some popular traditional machine learning models.


Table 1-2 Comparing Traditional Machine Learning Models

Linear regression (supervised). Strengths: simplicity, interpretability, fast to train. Weaknesses: assumes a linear relationship; sensitive to outliers.

Logistic regression (supervised). Strengths: probabilistic approach, fast to train, interpretability. Weaknesses: assumes a linear decision boundary; not suitable for complex relationships.

Decision trees (supervised). Strengths: interpretability; handle both numerical and categorical data. Weaknesses: can easily overfit or underfit; sensitive to small changes in the data.

Random forest (supervised). Strengths: reduces overfitting compared to decision trees; handles both numerical and categorical data. Weaknesses: less interpretable than decision trees; longer training time.

Support vector machines (SVMs) (supervised). Strengths: effective in high-dimensional spaces; robust against overfitting. Weaknesses: not suitable for larger datasets; less effective on noisier datasets with overlapping classes.

Naive Bayes (supervised). Strengths: fast; works well with high dimensions and categorical data. Weaknesses: makes a strong assumption about the independence of features.

K-nearest neighbors (KNN) (supervised). Strengths: simple, nonparametric, and very versatile. Weaknesses: computationally expensive as dataset size grows; normalization of data is required.

Neural networks (supervised/unsupervised). Strengths: can model complex, nonlinear relationships. Weaknesses: require significant data and computational power; "black box" nature can hinder interpretability.

K-means (unsupervised). Strengths: simple and fast. Weaknesses: must specify the number of clusters in advance; sensitive to initial values and outliers.

Principal component analysis (PCA) (unsupervised). Strengths: used for dimensionality reduction; removes correlated features. Weaknesses: not suitable if the data doesn't follow a Gaussian distribution; loss of interpretability.

Reinforcement learning (e.g., Q-learning) (reinforcement). Strengths: can handle complex, sequential tasks. Weaknesses: requires a lot of data and computational power; defining rewards can be tricky.

Each of these models has specific use cases in which it shines. The best model to use often depends on the specific data and task at hand.

Tip
Neural networks can be used for both supervised and unsupervised
learning tasks, as well as a combination of the two, known as semi-
supervised learning. The categorization depends on the specific
problem being solved and the type of data available. In supervised
tasks like classification or regression, neural networks are trained
using labeled data. The network learns to map inputs to the correct
outputs (labels) through back-propagation and iterative optimization
of a loss function. Examples include image classification, sentiment
analysis, and time-series prediction.
In unsupervised tasks, neural networks are trained without labels to
find underlying patterns or representations in the data. Techniques
like autoencoders and generative adversarial networks (GANs) are
examples of neural networks used in unsupervised learning. They are
often used for tasks like anomaly detection, dimensionality reduction,
and data generation.
Some neural networks leverage both labeled and unlabeled data to improve learning performance. This is especially useful when acquiring a fully labeled dataset would be expensive or time-consuming.
Although they do not strictly fall under the categories of supervised or
unsupervised, neural networks can also be used in reinforcement
learning; in this application, they are trained to make a sequence of
decisions to maximize some notion of cumulative reward.
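As a small illustration of the unsupervised case, the following NumPy sketch trains a toy linear autoencoder on synthetic, unlabeled data to minimize reconstruction error. Real autoencoders are deeper and nonlinear; this is only a minimal sketch of the idea:

```python
import numpy as np

rng = np.random.default_rng(2)

# Unlabeled data: 3-dimensional points that mostly lie along one line.
t = rng.normal(size=(100, 1))
X = t @ np.array([[1.0, 2.0, -1.0]]) + 0.1 * rng.normal(size=(100, 3))

# A linear autoencoder: encode 3 -> 1, decode 1 -> 3, trained with no
# labels, purely to minimize reconstruction error.
W_enc = rng.normal(scale=0.1, size=(3, 1))
W_dec = rng.normal(scale=0.1, size=(1, 3))
lr = 0.01

def loss(X):
    recon = X @ W_enc @ W_dec
    return ((recon - X) ** 2).mean()

initial_loss = loss(X)
for _ in range(2000):
    code = X @ W_enc                     # compressed representation
    recon = code @ W_dec                 # attempted reconstruction
    err = recon - X                      # reconstruction error
    grad_dec = code.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final_loss = loss(X)
print(f"reconstruction MSE: {initial_loss:.3f} -> {final_loss:.3f}")
```

The network discovers the dominant direction in the data on its own, which is why autoencoders are useful for dimensionality reduction and anomaly detection (points that reconstruct poorly are anomalous).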

Table 1-3 compares some of the more modern AI models.

Table 1-3 Comparing Modern AI Models

Convolutional neural networks (CNNs) (supervised). Strengths: excellent for image processing and pattern recognition in spatial data. Weaknesses: computationally intensive; require large datasets.

Recurrent neural networks (RNNs) (supervised). Strengths: good for sequence data (time series, NLP). Weaknesses: difficulty handling long-term dependencies; training can be computationally intensive.

Long short-term memory (LSTM) (supervised). Strengths: a variant of RNNs that can handle long-term dependencies. Weaknesses: computationally intensive; requires large datasets.

Transformer models (e.g., BERT, GPT, LLaMA, Claude, Gemini) (supervised/unsupervised). Strengths: excellent performance in NLP; capable of understanding context in text data. Weaknesses: require significant computational resources and large datasets; not easily interpretable.

Generative adversarial networks (GANs) (unsupervised). Strengths: capable of generating new data; good for tasks like image synthesis. Weaknesses: training can be difficult and unstable; require significant computational resources.

Autoencoders (unsupervised). Strengths: good for dimensionality reduction, anomaly detection, and generating new data. Weaknesses: may not perform well with too much noise in the data; require large amounts of data.

Reinforcement learning models (e.g., AlphaGo, OpenAI Five) (reinforcement). Strengths: can handle complex, sequential tasks; excellent for game playing and navigation. Weaknesses: require a lot of data and computational power; defining rewards can be tricky.

Capsule networks (supervised). Strengths: potential improvement over CNNs; better preservation of hierarchical relationships. Weaknesses: still experimental and not widely adopted; require large datasets.

Graph neural networks (GNNs) (supervised/unsupervised). Strengths: excellent for structured data like social networks and chemical compounds. Weaknesses: computationally intensive; less mature than other models.

Quantum neural networks (QNNs) (supervised/unsupervised). Strengths: potential for very high computational speeds; able to solve complex problems. Weaknesses: still experimental; require quantum computers.

All of the models in Table 1-3 are data-hungry and computationally intensive; however, they have achieved state-of-the-art results in their respective domains. They are also characterized by the ability to learn features automatically from raw data, which is a key advantage over many traditional machine learning models.
Earlier, I mentioned Transformer-based models. What are those models?
Well, let’s ask ChatGPT using GPT-4:
“explain Transformer-based AI models like I am 5 years old.”

Imagine you’re trying to tell a story with your toys. But there’s a catch:
Each toy can remember only one thing at a time. It gets difficult to tell a
really good story, right? Because the toys don’t know what the other
toys are doing, they can’t react properly or help each other out.

Now, imagine that each of your toys could talk to all the other toys at
the same time, and remember what the other toys are doing. This way,
they can work together and understand the story much better, because
they’re all sharing what they know. That would make your story much
more fun and exciting!

Well, Transformer-based AI models work in a similar way. Instead of one part of the model looking at only one piece of information at a time,
like a single word in a sentence, it can look at all the words at the same
time and understand how they all fit together. This helps the model to
understand language much better and makes it really good at tasks like
translating languages or writing stories.
Lately, everyone has been talking about OpenAI’s ChatGPT; however, it’s not the only Transformer-based AI implementation. Many other models are available, including open-source models that you can access on Hugging Face.

Hugging Face Hub: A Game-Changer in Collaborative Machine Learning
In the expanding universe of AI and machine learning (ML), one platform
has emerged as a beacon of innovation and collaboration: the Hugging
Face Hub. You can access this collaboration hub at
https://huggingface.co.
Hugging Face is a company that specializes in AI. It has created a
business around Transformer-based models (and others), which are
state-of-the-art models for many NLP tasks. Perhaps its most well-known
product is the Transformers library, an open-source library that provides
pretrained models and training scripts for Transformer models, including
popular ones like BERT (Bidirectional Encoder Representations from Transformers), GPT-2 (Generative Pretrained Transformer 2), GPT-3, GPT-4, Gemini, the LLaMA series, and the Falcon series.
These models have been pretrained on large amounts of text and can be
fine-tuned to handle specific tasks such as text classification, text
generation, translation, and summarization. The library has been
designed to be framework-agnostic, and it supports both PyTorch and
TensorFlow.
Hugging Face also provides other NLP tools and resources, such as the
Tokenizers library for text tokenization, the Datasets library for loading
and sharing datasets, and a model hub where people can share and
collaborate on models.
The Hugging Face Hub comprises an extraordinary collection of more
than 300,000 models, 65,000 datasets, and 50,000 demo applications,
known as “Spaces.” These numbers will most certainly be out of date soon, as hundreds of AI researchers and aficionados contribute more models, datasets, and applications on a daily basis. These resources
represent an invaluable treasure trove for anyone interested in delving
into different domains of ML and AI.
Another random document with
no related content on Scribd:
“Iwoggiama wogien woggiamana wogien, immangana jellerrima
wogien.
Imbaka kwuerakwa wogien. Imbelluoluodö wogien.”

Then they pause for a moment, and, continuing immediately,


mimic the croaking by crying:

“Yi kwa e kwa, yi kwa e kwa!”

The singing is assisted by an accompaniment on the bamboo


trumpet which sounds like: “Tip bu, tip bu, tip bu, tip pau-a, tip pau-a,
tip pau-a.” Music sticks are used by all taking part in the
corrobboree, the percussions being equally timed and of uniform
strength.
A little performance, melodramatic in a way, which is very popular
with the Larrekiya and Wogait in the Northern Territory, is designed
to scoff at the wailing of women. It is introduced by the following
refrain:
“Arada kadji mara uda kayan yan; arada kadji mara uda kayan
mölle ulpululu mölle, jangaji karra kanjin mada da nga la ulmin ja
jandadbi karra kun kun mukale la enbulukdi bana mölle mara
kanjinmada da buluk di bölle arada kadji balla.”
Then the wailing begins. With a pathos and sincerity so cleverly
feigned that one might easily be led to believe they are genuine, the
men strike up a lament which is thrilling. Monosyllabic sounds pierce
the air, which terminate with a sob:

“Lö -e, lö, lö -e, lö -ö, lö -ö la.”

Then a burst of laughter ends the solemnity; and the refrain is


continued amidst much hilarity:
“Kanjin mada danö wüdningi juan madji karra kunkul mukana
mölle enbuluk di bana mälle kanjin mada.”
At the conclusion, a general criticism takes place of the skill
displayed by the different performers in the reproduction of the
effeminate sounds. Throughout the proceedings an accompaniment
is played on the bamboo trumpet sounding like: “Tib bu, tib bu, tib
bu-a”; whilst a number of men beat their music sticks in a rhythm
represented by one long and three short taps.
The Bathurst and Melville Islanders are famous for their
impressive ceremonies. At most of them, a group of men stand in a
semi-circle and supply the accompaniment by clapping one or both
hands against thigh or buttock, producing dull, drum-like sounds by
the impact; they further supplement the rhythmic sounds by crying,
“brr, brr, brr” in unison with the beating. Any actors who enter the ring
in front of these men frequently ejaculate a sound like a short “poop,”
and this immediately solicits a concerted “iah” from all taking part.
In their corrobboree of the jungle fowl, an old man rushes into the
ring, bending his arms at the elbow and holding them close against
his body, while he rests his closed fists upon his chest. His attitude
resembles that of a professional runner, although in reality his arms
are supposed to represent the wings of the bird. The man looks
about the ground, from side to side, as though in search of
something. Presently he imitates the familiar call of the bird, with a
shrill penetrating voice, which sounds like “he-r-r-r-l.” In answer to
this, the chorus surrounding him spontaneously gives utterance to a
number of short, non-articulate noises which sound like “brrl, brrl,
brrl,” and are claimed to be the call of the bird’s mate. From now on
the performer intermittently cries “he,” and, every time he does so,
the rest of the men respond with a loud “yeh.” Then he throws sand
and clouds of dust behind him with his feet, with the object of
imitating the way a jungle fowl throws up earth into large mounds in
which its eggs are deposited.
When the kangaroo ceremony is announced, one or two men jump
into the ring, with their legs slightly bent at the knees. Their arms are
held forward, bent vertically at the elbows, their hands being kept
prone and partly closed. The whole posture is to resemble that of the
hopping marsupial of Australia. In jumping round the space, the actor
turns his head from side to side, face downwards, as though in
search of food. When such is supposed to have been found, his
hands may assist to support his body and the hopping ceases. His
legs are then bent strongly in the knees, so that his body sinks and
his buttocks come to rest immediately above the heels, or, indeed,
he may squat entirely upon the ground. He now holds his hands
together and passes them between his thighs to the ground, from
which he scoops a quantity of sand and lifts it to the level of his
mouth. Suddenly dropping the sand, the performer bounds to his feet
and runs round the ring, vigorously stamping his feet and making a
peculiar hissing sound. The act is frequently interrupted by cries of
“poop” from the mouths of the saltant men, which are immediately
responded to by the crowd with a boisterous “iah.” Then the hopping
re-commences, the actor frequently stopping to scratch his leg with
one of his hands held like a claw. Occasionally he lifts an arm to his
mouth to lick it just like the animal itself does. The pace of the time-
beaters at this stage quickens, and the actor begins to scratch all
accessible parts of his body as if possessed. An unexpected “poop,”
followed by a hearty “iah,” terminates this part of the performance
amid loud shrieks and applause.
It is not long before two performers again leap into the ring, each
armed with a light reed in his hand, resembling a spear. The above
demonstration is partly recapitulated, being, however, slightly
modified in that one man, at intervals, takes the part of a hunter
endeavouring to drive his spear into the enraged “kangaroo” on the
opposite side of the ring.
The corrobboree of a fight is presented by an imaginary spear duel
between two angry men. They are “armed” with a light reed each,
and upon entering the arena, composed of the time-beating chorus,
place themselves in defiance of each other and cut all sorts of antics.
They brandish their imaginary weapons, roll their eyes, and throw
their bodies about wildly. Then they start running around the ring,
slapping their thighs together, with their heads thrown back and their
arms upwards.
As an instance of the clever way in which aborigines mimic special
happenings or striking episodes, and also hand on the records to
subsequent generations, the ship-of-war corrobboree is here briefly
outlined. This is a reproduction of events which happened at Fort
Dundas about a century ago, but the present day natives still enjoy
looking at it and include it among their favourite turns. After the usual
preliminaries, a performer rushes into the ring and stands with one
leg in front of the other and slightly flexed in the knee. Throwing his
head back and looking upwards, he starts a vigorous “hauling action”
by alternately throwing one arm out in the direction he is looking,
closing the fist and making an imaginary pull right down to his loins,
while at the same time the other arm is thrown out and returned in a
similar way. This act is in imitation of the hauling in of a sail on a
European man-of-war, and is rhythmically followed by the time-
beaters. Running to the opposite side of the ring, the same act is
repeated. Then the same hauling motion is continued, but this time
pulling from below upwards towards the body, in a manner
suggesting the heaving of an anchor. During this performance,
several sharp and harsh interjections are heard, which suggest a skit
on the orders of the commanding officers; in fact, the whole
ceremony is to be looked upon as a travesty of naval discipline.
During several of the performances described above, the following
chant was heard, in perfect accord with the rhythmic smacks
produced by the hands of the chorus:

“Tupera monan la jerra


Tupera monan la jerra,”

and

“Tupatunan tenakomejona.”

At Fowler’s Bay, the men like no dance so much as one which


caricatures a woman on a collecting excursion. The performer walks
from the darkness of night into the bright glare produced by the
camp-fire, with an exaggerated bend of the knees and a loudly
perceptible stamp of the feet. Over one of his shoulders he carries a
collecting bag, and as he moves forwards and sideways, his eyes
are rivetted to the ground in front of him; he is supposed to be
looking for food-stuffs. Suddenly he stops, stoops, and feigns to be
picking up something, which is assigned to the bag. Then he hurries
on and repeats the same performance. Whilst he is acting thus, the
audience, grouped at one side of the fire, is diligently beating time to
his steps by knocking two boomerangs (“kaili”) together.
In another act he walks quickly into the light, stops suddenly, and
looks hard at the ground in front of him. Presently he lifts his right
foot, only to immediately bring it back to the ground again with a
thud. Emitting a squeal like that of a dying or wounded bandicoot, he
stoops and makes out he is lifting his prey triumphantly into the air.
Yet another performance portrays the slaying of an enemy. A
second actor, who represents the vanquished foe, is requested to lie
in the centre of the arena, whilst the victor dances around him, wildly
flourishing and swinging his club. Every now and then the ground is
bashed with the heavy weapon as near to the prostrate figure as
possible; and after every blow the fallen warrior is seen to writhe his
body between the legs of the victor as if he were really suffering
terrible agony.
An aboriginal often sings to himself for no other purpose than for
his own entertainment, in which occupation he manifests
considerable pleasure, and repeats the tune to his heart’s content, at
times almost to the verge of physical exhaustion. In his endeavour to
become a noted singer amongst his tribespeople, a man at every
opportunity stimulates his ambition. He listens with envy to the
tireless chirping of a cricket and suggests to himself the advantages
of acquiring such powers of vocal endurance (as he imagines them
to be). The Larrekiya youths admire the large Cicadae for similar
reasons; and they do not hesitate, upon occasions, to catch one or
two of the winged music-makers and suck their viscera, hoping
thereby to acquire increased musical talent.
When singing ensemble, the musical productions have widely
varying purposes and meanings. The performance might be purely
convivial and entertaining, when a number of refrains are hummed or
sung, solo and collectively, to the combined accompaniment which is
rendered by most of the others present. Any attempt at harmony is
wanting. Boisterous music can always be counted upon at tribal war-
dances, when the excited and infuriated mobs almost lose control of
their reason, and by singing to their ancestral fathers endeavour to
bring destruction upon the enemy by the wildest imprecatory acts.
At initiation ceremonies, the old men, at the time of spilling the
novice’s blood, are no less excited; and the musical items are
reduced to hoarse, rude utterances of a decidedly disquieting
flavour. Under these conditions the soul of a true savage unmasks
itself. It speaks in coarse, disconnected sounds which are hardly
recognizable as human, but, at the moment, none the less in
sympathy with his inner feelings.
PLATE LI

1. Ochre-drawings of mythic semi-human creatures, Forrest River,


north-western Australia.

2. Sacred “Utnguringita” or witchedy grub drawings, Emily Gap,


MacDonnell Ranges.
On the other hand, the music which is produced at religious
ceremonies is quieter and of a more ding-dong style. The natives
present their sacred chants, which they have inherited from their
forefathers, for the purpose of getting into communion with the spirits
surrounding them; and, although such proceedings may be
extremely monotonous, the solemn colloquy is nevertheless musical.
It is a common practice, for that matter, among the tribes of
Australia, for one individual to carry on conversation with another by
singing the words. When, for instance, it is the intention of the
persons engaged in conversation to make the matter as little
noticeable as possible, or when they want to impart information to
each other without attracting the attention of a third party, they clothe
their words in song. And the same is also done when a third party is
to be criticized. Moreover, it is for exactly similar reasons that all
communications which are supposed to be carried on with
superhuman beings are chanted or crooned, in order that the Evil
Spirit’s attention may not be drawn to the fact.
CHAPTER XXXI
LANGUAGE

Aboriginal dialects euphonious and full—Words largely imitative of natural call-notes—Derivations from other characteristics—Linguistic generalizations impossible—Vocabularies replete in certain directions—Gesture and sign languages—Passionate conversationalist—Reckoning of time—Computation of numbers—Word expressing action—Terminal syllables—Affirmation and negation—Exclamations—Specific suffixes—Diminutive expressions—Verb endings—Word with a variety of meanings—Personal pronouns—Aspirate sounds rare—Consonantal combination “ng” very frequent—Greetings—Exclamation and Interrogative—Curious Similarities.

As full as the aboriginal legends are of poetry, so their dialects are full of music. For euphony, fullness, and vowel quality, few modern
languages equal the primitive tongue; and much of the charm is
derived from a natural source. Even in our own language, the
vernacular name of a bird or animal is often derived from the
particular call made by the creature. In Europe we have christened
the cuckoo, the peewit, and the boo-book owl because of their
distinctive cries; in a like sense, the central Australian natives call the
bell-bird “ban-ban-balele,” the mopoke “kore-gore,” and the spur-
wing plover “kurreke-tar-tar,” all three names being sounded like the
natural cries of the birds themselves.
As further illustrations we might mention the Dieri and Arunndta
word for kangaroo, viz. “arre,” which, one is told, is what the animal
“says” when brought to bay, and the Sunday Islanders’ name for a
grasshopper, “dingi-ding,” which alludes to its chirrup.
In regard to the coining of words, quite apart from the sonorous
qualities the object named might possess, the aboriginal is no less
inventive. The Arunndta call thunder “kwatche-ingoma,” i.e. “roaring
water,” and a sharp shower of rain pattering on the ground “kwatche
unndoma,” i.e. “dancing water.” What is commonly known as the
milk-bush in Australia (Sarcostemma) is called “epi-epi” by the
Aluridja because it exudes a milk-like sap when broken and reminds
them of the female breast (“epi”). Kangaroo grass (Anthistiria) is
referred to by the Arunndta as “arre-arre” because, they maintain,
the red colour of its seed stalks resembles that of the marsupial. The
Arunndta for the familiar beef-wood tree of central Australia is “iltjija,”
meaning long fingers, the word drawing attention to the length of the
leaves.
Whereas in our own language words like “choleric” have originated
because the bile in a human system was supposed to beget wrath,
and whereas one bushman might accuse another of being “livery”
when he is ill-tempered, the Arunndta have a verb,
“lunbatunbatterama,” meaning to grumble or to be sulky, in which
“unba” stands for bile.
In view of the great number of Australian tribes and the multiplicity
of their dialects, any attempt at linguistic generalizations would seem
to be futile, when often even adjoining tribes have adopted totally
different vocabularies for the most common commodities of life. As a
matter of fact, the groups which build up one and the same big tribe
often have considerable differences in their vocabularies. For
instance, the eastern groups of the Arunndta make use of very many
words and expressions which are quite foreign to the western. Yet in
respect of certain words, it is known that with slight modifications
their significance has carried far beyond the borders of a single tribe
and has been accepted by a group of tribes living up to a few
hundred miles apart.
Although the aboriginal tongue is crude in its construction, it is,
nevertheless, wonderfully rich and scientifically exact. Whereas a
modern language becomes very commonplace in the ordinary
course of conversation, and is inclined to handle subject matter
somewhat flippantly, the aboriginal system of nomenclature is both
profuse and incontrovertible. During his descriptive narrations from
Nature, a European rarely bothers about discriminating between
objects composing one big class. If he talks of a forest, he is usually
satisfied to convey the idea of a number of trees standing collectively
at the site his story is dealing with; if he wishes to be exact he might
specially describe the trees as pines or gums. But the aboriginal
wants, and gives, more; his vocabulary, in fact, rarely includes such
words as “tree,” “animal,” “meat,” or “seed”; he tells you immediately,
without being specially asked, that the “tree” is a gum, the “animal” a
wallaby, the “meat” that of kangaroo, and the “seed” that of the
water-lily. Moreover, his verbal supply is so copious that in a single
word he can tell one the name, the age, the habitat, and many
characteristics.
A language without words is known to most Australian natives;
thoughts and messages are communicated by means of gestures
from individual to individual, and from tribe to tribe. This system is so
perfect, and the code so well understood by all, that important tidings
are transmitted from one centre to another in incredibly short time.
The method might to a certain extent be compared with the flag-
signalling of a marine; it is too complicated to be discussed in detail,
but we shall select a few code signals by way of illustration.
Halloa! To attract the attention of a person whom conversation is
to be taken up with, the native, standing as erect as possible, and
with his legs astride, lifts his hands to his head and swings them
outwards (in opposite directions), downwards, and upwards again,
time after time. Whilst doing so, he calls aloud, with a shrill, piercing
note, even though the other fellow be well beyond hearing distance.
PLATE LII

Aluridja man rendering a musical accompaniment with boomerangs.

“... he claps their ends together in quick succession, and by so doing produces rhythmic clanks to suit the style of any dance or the time of any song.”

Come here! Maintaining the erect position, he throws both hands upwards, to above a shoulder on one side, then sweeps them
(extended) in front of his body to well behind it on the opposite side,
at the same time bending his body forwards from the hips.
I am coming to you. The person places his hands upon his chest,
and throws them towards the stranger.
Follow behind me. He slaps his right hand against his buttock.
Stop or stay there. The arms are held, bent at the elbows, at each
side of the body, with the open palms of the hands turned towards
the distant man, and vertical circles are described with them in that
position.
Go that way. He points the index finger in the required direction
and jerks the hand forwards several times.
A woman. He places his hands over his breasts.
Kangaroo. The hand is passed, with successive semi-circular
movements, indicating the hopping, from one side of the body to the
other.
A snake. The hand is moved forwards in a wavy line.
I can see. He touches one or both eyes with his hand.
Dead man. Throws his head back and extends his arms laterally
upwards.
A man. He strokes his chin.
I can hear. He points to his ear.
To kill. The palms of the hands are placed together and moved as
if striking an object.
Emu. Bends the fingers of one hand at right angles to palm and
sways the arm to and fro.
Sleeping. He lays his head upon the palm of a hand.
Sitting. He points to the ground.
Running. His legs are moved in a running fashion.
Hungry. He draws in his abdomen and shows his ribs.
An intricate system of sign language is carried on by movements
of the hands and fingers when the natives are at close enough range
for the signals to be visible. The following are a few of the more
common signs used every day by the Arunndta tribe:
To call the attention of a person, a man holds his open hand in
front of his face, with the palm towards him, and drops it again. The
sign having been acknowledged, a mutual exchange of gesture
correspondence is indulged in, by means of which an almost
inexhaustible number of ideas can be communicated in the form of
coherent “speech.” The following few signs (most of which are
sketched on page 391, Fig. 57) may be taken as typical of a
vocabulary which in its entirety is both elaborate and intricate:
Fig. 57. Sign-language of Arunndta tribe.

Which way? The forefinger is held erect (the other fingers of the hand being closed) and several times moved downwards or away from the face (No. 1).
Big! The hand is held supine while the fingers are first extended and then closed in again upon the palm. The thumb is kept more or less erect (No. 2).
Little! The hand is held with its palm upwards, and the fingers moved towards the thumb, the middle one touching it (No. 3).
To rest (“I am going to sit down”). The hand is held semi-prone, with the fingers extended and lying closely together, the thumb resting at the top. It is jerked downwards two or three times (No. 4).
To camp (“I am going to camp”). With the hand in the same position as in No. 4, the middle, fourth, and fifth fingers are closed in upon the palm, while the index is made to touch the thumb (No. 5).
To camp (collectively). The hand remains in semi-prone position, but with the forefinger pointing towards the ground at half a right angle, and the same three fingers as mentioned in No. 5 remaining closed (No. 6).
To walk. The hand is held supine with only the forefinger extended; it is moved from side to side (No. 8).
Come this way quickly! Holding the hand nearly horizontal and semi-prone, the fingers are closed in upon the palm and the thumb placed over them; in that position the hand is jerked sideways (No. 7).
Listen! The hand, with fingers fully extended, is held with its palm turned towards the face and moved to and from the face a number of times (No. 9).
Hear him! The hand is kept prone, with the middle and fourth fingers closed and the remaining three digits extended. Thus it is several times jerked towards an ear of the man who is “speaking” (No. 10).
What did you say? The hand is held erect, with the palm towards the face; the thumb and forefinger are kept straight, but the other fingers flexed; the forefinger is jerked to and fro (No. 11).
Fighting. With the hand in an erect position, but the palm turned away from the face, all digits except the index are closed; the last-named, moreover, is jerked with a circular motion (No. 12).
On the summit of a Hill. The sign-maker extends his hand and with the small finger border of it taps the crest of his head (No. 13).
Running up the Hill. He extends the index and middle fingers upwards, while the hand is more or less prone, and jerks them up and down (No. 14).
It is no good! The hand in a prone position (with the index and small fingers extended, the fourth finger closed in, and the tips of the thumb and middle finger touching) is moved in a circle from the wrist (No. 15).
He is dead. Change the position of hand from that of No. 7 to that of No. 16.
Moving up the Valley. The hand is held partly extended in a prone position and moved horizontally, from side to side, at the side of the man’s head (No. 17).
Behold the ears (of a Kangaroo). He holds the hand in a vertical position with the index and middle fingers extended, moving them in imitation of an animal’s ears (No. 18).
Steady! Hold the hand in position indicated in No. 7 and move it vertically with a striking action.
A Euro. Much the same position as shown in No. 18, but the index and middle fingers are only semi-extended; the other fingers are made to touch the thumb (No. 19).
An Echidna or Native Porcupine. The hand is held prone and horizontally, with only the forefinger extended, which is moved regularly from side to side (No. 20).
An Opossum. Hand semi-prone and horizontal; thumb extended and forefinger bent in second joint from the tip; the remaining fingers closed in upon palm. The hand is moved downwards (No. 21).
An Iguana. Hand semi-prone and horizontal. The tip of forefinger is brought nearly into contact with that of thumb, the remaining fingers being fully extended. The hand is moved vertically up and down (No. 22).
A Carpet Snake. Hand semi-prone and horizontal. The points of the fingers are bunched against the thumb, and circular movements made with the hand (No. 23).
A Tiger Snake. Hand semi-prone. The forefinger extended and directed downwards at about half a right angle; thumb extended; the remaining three fingers are moved alternately away from and towards the palm of the hand (No. 24).
A Black Snake. Hand semi-prone and horizontal. All fingers closed in upon palm, thumb extended upwards. Slight downward movements (No. 25).
The Evil Spirit. Hand in prone position and directed downwards. Thumb, index and small fingers extended, the remaining fingers closed. Frequent down and up movements (No. 26).
Native Pear (Marsdenia). Hand held erect; forefinger flexed to touch the tip of thumb; the remaining fingers closed (No. 27).
Native Orange (Capparis Mitchelli). Hand prone; index and middle fingers flexed; thumb, fourth and fifth fingers extended. Forward and slightly upward jerks (No. 28).
Marsupial Mice. The hand is held in a supine position, while the points of the fingers are grouped around the thumb and jerked towards the sign-maker’s mouth (No. 29).
Native Plum (Santalum). With all fingers semi-extended, the hand is held in a supine position and jerked towards the face (No. 30).
A Dingo or Wild Dog. The hand is held in a vertical position, with the forefinger extended; the other digits are flexed, the tips of the middle finger and thumb touching. Side movements are made with the forefinger (No. 31).
A Bustard or Wild Turkey. Hand prone; thumb, middle, fourth, and fifth fingers bunched together, the forefinger bent downwards. Vertical movements (No. 32).
A Lizard. Hand semi-prone; forefinger and thumb fully extended, the remaining fingers flexed. Upward and downward movements from the wrist (No. 33).
A Crow. The thumb, index and middle fingers extended, the remaining fingers flexed. The two extended fingers are moved against each other (No. 34).
Wild Bee Honey. The tip of the forefinger is repeatedly flicked from the thumb, while the three remaining fingers are kept in a flexed position (No. 35).
A Wallaby. Hand semi-prone; the forefinger and thumb are extended while the remaining fingers lie against the palm. Small, jerking, upward movements are made with the forefinger, in which the hand also takes part (No. 36).
My Son. The sign-maker taps his chin with the extended forefinger of his hand.
My Sister’s Daughter. He holds his right nipple between the thumb and forefinger of his right hand.
My Sister’s Daughter’s Daughter. He surrounds his right nipple with the tips of his fingers and alternately opens and closes them with a scratching action.
