In distributed systems, data may partially overlap in both the sample and feature spaces, that is, it may be partitioned both horizontally and vertically. By combining horizontal and vertical federated learning (FL), hybrid FL emerges as a promising solution that handles data overlap in the sample and feature spaces simultaneously. Due to its decentralized nature, hybrid FL is vulnerable to model poisoning attacks, where malicious devices corrupt the global model by sending crafted model updates to the server. Existing work usually analyzes the statistical characteristics of all updates to resist model poisoning attacks. However, training local models in hybrid FL requires additional communication and computation steps, which increases the detection cost. In addition, because of the data diversity in hybrid FL, solutions built on the assumption that malicious models are clearly distinct from honest ones may misclassify honest models as malicious, resulting in low accuracy. To this end, we propose a secure and efficient hybrid FL framework against model poisoning attacks. Specifically, we first identify two attacks that define how attackers can manipulate local models in a harmful yet covert way. Then, we analyze the execution time and energy consumption of hybrid FL. Based on this analysis, we formulate an optimization problem that minimizes the training cost while guaranteeing accuracy under the effect of attacks. To solve the formulated problem, we transform it into a Markov decision process and model it as a multiagent reinforcement learning (MARL) problem. We then propose a MARL-based malicious device detection (MDD) method that selects honest devices to participate in training, thereby improving efficiency. In addition, we propose a complementary poisoned model detection (PMD) method based on model change consistency, which prevents poisoned models from being used in model aggregation. Experimental results validate that, under the random local model poisoning attack, the proposed MDD method saves over 50% of the training cost while guaranteeing accuracy. Against the more advanced adaptive local model poisoning (ALMP) attack, combining the proposed MDD and PMD methods achieves the desired accuracy while reducing execution time and energy consumption.
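To make the idea of model-change consistency concrete, the following is a minimal sketch of one possible server-side filter; it is not the PMD algorithm proposed in this paper. The cosine-similarity rule, the `threshold` parameter, and the function names `consistency_scores` and `filter_poisoned` are illustrative assumptions: each device's current update is compared with its own previous update, and updates whose change direction is inconsistent with that history are excluded from aggregation.

```python
# Hypothetical sketch of a model-change-consistency filter (assumed example,
# not the paper's PMD method). Updates are dicts mapping device id -> list of
# parameter arrays (one local model update per device).
import numpy as np


def flatten(update):
    """Concatenate all parameter tensors of one local update into a single vector."""
    return np.concatenate([np.asarray(p, dtype=np.float64).ravel() for p in update])


def consistency_scores(prev_updates, curr_updates):
    """Cosine similarity between each device's previous and current update direction."""
    scores = {}
    for dev, curr in curr_updates.items():
        prev = prev_updates.get(dev)
        if prev is None:
            scores[dev] = 1.0  # no history yet; treat the device as consistent
            continue
        a, b = flatten(prev), flatten(curr)
        denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
        scores[dev] = float(a @ b / denom)
    return scores


def filter_poisoned(prev_updates, curr_updates, threshold=0.0):
    """Keep only updates whose change direction stays consistent with the device's history."""
    scores = consistency_scores(prev_updates, curr_updates)
    return {dev: upd for dev, upd in curr_updates.items() if scores[dev] >= threshold}
```

In this sketch, a sudden reversal of a device's update direction (negative cosine similarity) is treated as a sign of tampering; the actual PMD method in the paper may use a different consistency measure and decision rule.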