Machine (data-driven, learning-based) decision making is increasingly being used to assist or replace human decision making in a variety of domains, ranging from banking (rating user credit) and recruiting (ranking applicants) to the judiciary (profiling criminals) and journalism (recommending news stories). Recently, concerns have been raised about the potential for discrimination and unfairness in such machine decisions. Against this background, in this talk I will pose and attempt to answer the following high-level questions: (a) How do machines learn to make discriminatory or unfair decisions? (b) How can we quantify unfairness in machine decision making? (c) How can we control machine unfairness, i.e., can we design learning mechanisms that avoid unfair decision making? (d) Is there a cost to fair decision making?
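To make question (b) concrete, one standard way to quantify unfairness (a common metric from the fairness literature, not necessarily the measure used in this talk) is the demographic-parity difference: the gap in favorable-decision rates between demographic groups. The sketch below is illustrative; all names and data in it are hypothetical.

```python
def demographic_parity_difference(decisions, groups):
    """Absolute gap in positive-decision rates between groups.

    decisions: list of 0/1 outcomes (1 = favorable decision, e.g. loan approved)
    groups:    list of group labels, one per decision (e.g. "a", "b")
    """
    rates = {}
    for g in set(groups):
        # Positive-decision rate within group g
        outcomes = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    values = sorted(rates.values())
    # 0.0 means both groups receive favorable decisions at the same rate
    return values[-1] - values[0]

# Hypothetical example: group "a" is approved 3/4 of the time, group "b" 1/4.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(decisions, groups))  # 0.5
```

A nonzero value signals disparate treatment in outcomes; "controlling" unfairness (question c) then amounts to constraining such a quantity during learning, typically at some cost in accuracy (question d).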