The healthcare industry, stagnant in productivity and overburdened by administrative tasks, stands to benefit greatly from artificial intelligence (AI) tools. Understandably, AI in healthcare has been met with skepticism and regulatory scrutiny, especially in the face of potential ethical issues. This paper examines the ethical implications of AI in healthcare across three primary use cases: automating administrative tasks, augmenting clinical practice, and automating specific elements of care. First, recent organizational actions will be examined and critiqued. Then, the four main principles of bioethics will be discussed using clinical case examples. These patient scenarios will highlight specific use cases for AI in healthcare, challenges to ethical implementation, and potential solutions. To achieve a more efficient, effective, and convenient healthcare system that benefits both patients and clinicians, the deployment of AI in healthcare must be contained within ethical guardrails that remain flexible enough to evolve with the technology. These guardrails should address patient consent, transparency of AI algorithms to the end user, regulatory oversight, liability frameworks, and dataset standards. Care should be taken not to hold AI tools to an unrealistic standard of perfection, but rather to compare them to the current, imperfect standard of care.