This case study reports on two first-semester programming courses with more than 190 students. Both courses made use of automated assessment of students' code submissions. By analyzing the version history of suspect submissions, we observed how students trick these systems. An analysis of more than 3300 submissions revealed four astonishingly simple cheat patterns (overfitting, evasion, redirection, and injection) that students can use to trick automated programming assignment assessment systems (APAAS), and we propose corresponding countermeasures. The immaturity of existing APAAS solutions may have implications for courses that rely heavily on automation, such as MOOCs. We therefore conclude that APAAS solutions should be examined much more from a security (code injection) point of view. Moreover, we identify the need to evolve existing unit testing frameworks into more evaluation-oriented teaching solutions that provide better cheat detection capabilities and differentiated grading support.
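To give a flavor of how simple such tricks can be, the following Java sketch illustrates the "overfitting" pattern: instead of implementing the requested algorithm, the submission hard-codes the inputs exercised by the grader's visible unit tests. This is an invented example for illustration only; the class and method names are assumptions and are not taken from the study's data.

```java
// Hypothetical illustration of the "overfitting" cheat pattern.
// The assignment asks for a general factorial implementation,
// but the submission only covers the inputs the visible tests check.
public class Factorial {

    public static long factorial(int n) {
        switch (n) {
            case 0:  return 1;          // expected by test case 1
            case 5:  return 120;        // expected by test case 2
            case 10: return 3628800;    // expected by test case 3
            default: return 0;          // any input the tests never use
        }
    }
}
```

A submission like this passes every test it has "seen" while implementing nothing of the intended algorithm, which is why countermeasures such as randomized or hidden test inputs are a natural response.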