Examining Vulnerabilities and Adversarial Exploitation of AI and LLMs | Puya Pakshad, Marwan Omar

Examining Vulnerabilities and Adversarial Exploitation of AI and LLMs


Other Available Formats

  • Hardcover: $235.00
  • Paperback: $195.00


Overview

As AI systems and large language models (LLMs) become integrated into decision-making, communication, and automation workflows, their security becomes a pressing concern. Despite their strong performance, these models contain vulnerabilities that can be exploited through adversarial techniques such as prompt manipulation, data exploitation, and cyber-attacks. These exploits undermine system reliability and create risks of privacy breaches, misinformation, and safety failures. Examining the vulnerabilities of AI and LLMs, alongside the methods used to exploit them, can reveal the limitations of current models and help in developing more resilient, trustworthy AI systems. Examining Vulnerabilities and Adversarial Exploitation of AI and LLMs explores AI security, bridging governance, policy, compliance, and zero-trust strategy with AI-driven defense, detection, and engineering. It examines LLM vulnerabilities and security models, addressing responsible AI adoption, data privacy compliance, and global policy alignment. Covering topics such as prompt manipulation, threat detection, and AI governance, this book is a useful resource for engineers, policymakers, academics, researchers, and scientists.

This item is Non-Returnable

Details

  • ISBN-13: 9798337382524
  • Publisher: IGI Global Scientific Publishing
  • Publish Date: April 2026
  • Dimensions: 10 x 7 x 0.88 inches
  • Shipping Weight: 1.93 pounds
  • Page Count: 300
