We propose AICropCAM, an edge image processing system with long-range connectivity that can be deployed on drones, on ground platforms, or as distributed sensor networks for plant phenotyping. We have successfully run multiple image classification, segmentation, and object detection models on this platform. Classification models sort images by image quality, crop type, and phenological stage. Object detection models detect and count plants, weeds, and insects, and can be extended to count flowers, fruits, and leaves. Segmentation models separate the canopy from the background and can potentially segment traits that indicate nutrient deficiency or disease; canopy segmentation results help estimate leaf area index and chlorophyll content. Because the models run sequentially, like a decision tree, the system can select the most accurate model for a given crop type and phenological stage, which makes it practical to scan fields containing multiple crops. The generated information is geo-tagged and transmitted over a low-throughput, long-range communication protocol (e.g., LoRa) to cloud data storage. AICropCAM reduces each 2-megabyte image file to roughly 100 bytes of actionable data, yielding large savings in data storage and transmission costs. This edge image capture and processing system remains open to improvement with new neural network predictive models and faster edge computers, and it provides plant scientists and crop breeders with a low-cost, flexible phenotyping tool for extracting multiple crop traits related to abiotic and biotic stress responses.
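To make the decision-tree-style execution order concrete, the sketch below shows one way the sequential pipeline could be organized. The model names, return values, and branching conditions are hypothetical placeholders for illustration only; the actual AICropCAM models and their interfaces are not specified in this section.

```python
# Minimal sketch of a sequential, decision-tree-style model pipeline.
# All models here are hypothetical stand-ins for the on-device networks.

def classify_quality(image):
    """Placeholder: True if the image is usable (not blurred or overexposed)."""
    return True

def classify_crop(image):
    """Placeholder: returns the crop type, e.g. 'maize' or 'soybean'."""
    return "maize"

def classify_stage(image):
    """Placeholder: returns the phenological stage, e.g. 'vegetative'."""
    return "vegetative"

def detect_plants(image):
    """Placeholder: returns a plant count from an object detection model."""
    return 42

def segment_canopy(image):
    """Placeholder: returns canopy cover fraction from a segmentation model."""
    return 0.63

def run_pipeline(image):
    """Run models sequentially; later stages depend on earlier decisions."""
    if not classify_quality(image):
        return None  # discard low-quality frames before heavier models run
    crop = classify_crop(image)
    stage = classify_stage(image)
    record = {"crop": crop, "stage": stage}
    # Choose downstream models based on crop type and phenological stage.
    if stage == "vegetative":
        record["plant_count"] = detect_plants(image)
        record["canopy_cover"] = segment_canopy(image)
    return record

if __name__ == "__main__":
    print(run_pipeline(image=b"raw-bytes"))
```

Ordering the models this way lets the device skip expensive detection and segmentation steps whenever an upstream classifier rejects the frame or selects a branch where they do not apply.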
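The exact on-air record layout is not described here; the following is a minimal sketch, assuming a hypothetical fixed-width encoding, of how a geo-tagged set of extracted traits can fit well under the stated ~100 bytes and thus travel over a LoRa uplink instead of the original ~2 MB image.

```python
import struct
import time

# Hypothetical fixed-width payload: timestamp, latitude, longitude,
# crop id, stage id, plant count, canopy cover, chlorophyll proxy.
# Field choices and widths are illustrative assumptions only.
FORMAT = "<IddBBHff"

def encode_record(lat, lon, crop_id, stage_id, plant_count, canopy, chlorophyll):
    """Pack one geo-tagged trait record into a compact binary payload."""
    return struct.pack(FORMAT, int(time.time()), lat, lon,
                       crop_id, stage_id, plant_count, canopy, chlorophyll)

payload = encode_record(40.4237, -86.9212, 1, 2, 42, 0.63, 35.7)
print(len(payload), "bytes")  # 32 bytes, comfortably within a small LoRa frame
```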