Receiving empty JSON response from a Python microservice in Qt

  • Hello, I am new to Qt and I am using it in my final-year project. I am consuming an API written in Python for facial recognition: I pass it the path of an image and it returns the identification of the people present in it. The problem is that when I consume the service and print the response, it comes out empty. From Python itself I did manage to consume the API and get the response, converting it to JSON. Here is the API code in Python and the Qt code that consumes the microservice.

    Code in Qt

    QString a = "E:/fotos movil/carlos marx/IMG_2925.jpg";
    QVariantMap feed;
    feed.insert("ruta", a);
    QByteArray payload = QJsonDocument::fromVariant(feed).toJson();
    QUrl myurl; // set to the service URL, e.g. the host/port the Flask app listens on
    QNetworkRequest request(myurl);
    request.setHeader(QNetworkRequest::ContentTypeHeader, "application/json");
    QNetworkAccessManager *restclient = new QNetworkAccessManager(this);
    QNetworkReply *reply = restclient->post(request, payload);
    qDebug() << reply->readAll(); // it is empty

    API code in Python

    import face_recognition
    import cv2
    import numpy as np
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/reconocer", methods=["POST"])  # route path assumed; it must match the URL the Qt client posts to
    def reconocer():
        # Load a sample picture and learn how to recognize it.
        s1_image = face_recognition.load_image_file("lena.png")
        s1_face_encoding = face_recognition.face_encodings(s1_image)[0]
        # Load a second and third sample picture and learn how to recognize them.
        s2_image = face_recognition.load_image_file("vidal.jpg")
        s2_face_encoding = face_recognition.face_encodings(s2_image)[0]
        s3_image = face_recognition.load_image_file("viltres.jpg")
        s3_face_encoding = face_recognition.face_encodings(s3_image)[0]
        # Create arrays of known face encodings and their names
        known_face_encodings = [s1_face_encoding, s2_face_encoding, s3_face_encoding]
        known_face_names = ["Lena", "Vidal", "Viltres"]  # labels assumed from the sample file names
        # Read the image whose path arrives in the JSON body
        frame = cv2.imread(request.json["ruta"])
        # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
        rgb_frame = frame[:, :, ::-1]
        # Find all the faces and face encodings in the image
        face_locations = face_recognition.face_locations(rgb_frame)
        face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)
        face_names = []
        for face_encoding in face_encodings:
            # See if the face is a match for the known face(s)
            matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
            name = "Desconocido"
            # If a match was found in known_face_encodings, just use the first one:
            # if True in matches:
            #     first_match_index = matches.index(True)
            #     name = known_face_names[first_match_index]
            # Or instead, use the known face with the smallest distance to the new face
            face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
            best_match_index = np.argmin(face_distances)
            if matches[best_match_index]:
                name = known_face_names[best_match_index]
            face_names.append(name)
        p = {'personas': face_names}
        print(p)
        return jsonify(p)

    if __name__ == "__main__":
        app.run(host='localhost', port=5001, threaded=True)

    I would appreciate it if anyone could guide me on how to access the response that Python returns through the dictionary, bearing in mind that I have verified that the API receives the image path and identifies the faces correctly.
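    For reference, the JSON round trip itself can be reproduced with nothing but the Python standard library. The sketch below is a stand-in, not the actual service: the route ("/"), the port (chosen by the OS), and the fixed answer in place of the recognition step are all assumptions. It shows the shape of the exchange the working Python-side call sees:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import request as urlreq

class Handler(BaseHTTPRequestHandler):
    """Stand-in for the Flask endpoint: reads {"ruta": ...}, answers {"personas": [...]}."""
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        # The recognition step is replaced by a fixed answer for this sketch.
        respuesta = json.dumps({"personas": ["Desconocido"], "ruta": body["ruta"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(respuesta)))
        self.end_headers()
        self.wfile.write(respuesta)

    def log_message(self, *args):  # keep the console quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Synchronous client call -- the pattern that worked for the poster from Python.
req = urlreq.Request(
    f"http://127.0.0.1:{server.server_port}/",
    data=json.dumps({"ruta": "E:/fotos movil/carlos marx/IMG_2925.jpg"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urlreq.urlopen(req) as resp:
    print(json.loads(resp.read()))

server.shutdown()
```

    Because `urlopen` blocks until the body has arrived, the response is complete by the time it is read, which is exactly what the Qt code below does not get for free.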

  • Lifetime Qt Champion

    Hi and welcome to devnet,

    QNetworkAccessManager is asynchronous, unlike Python's requests module. See its documentation. post() returns immediately, before any data has arrived, so readAll() right after it finds nothing. Connect the reply's QNetworkReply::finished signal to a slot and do the readAll() there, once the data has actually been received.
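    The point about asynchrony can be sketched in plain Python, with no Qt involved. In this toy model (all names hypothetical), the reply object comes back at once, but its data only arrives later on a background thread: reading it immediately yields an empty result, just like calling readAll() right after post(), while the data is reliably there inside the finished callback.

```python
import threading
import time

class AsyncReply:
    """Toy stand-in for QNetworkReply: data arrives later, on a background thread."""
    def __init__(self, payload):
        self._payload = payload
        self.data = b""
        self._cb = None
        self._lock = threading.Lock()
        threading.Thread(target=self._fetch, daemon=True).start()

    def _fetch(self):
        time.sleep(0.1)              # simulate network latency
        with self._lock:
            self.data = self._payload
            cb = self._cb
        if cb:
            cb(self.data)

    def on_finished(self, cb):       # analogue of connecting to finished()
        with self._lock:
            if self.data:            # already finished: fire right away
                cb(self.data)
            else:
                self._cb = cb

reply = AsyncReply(b'{"personas": ["Desconocido"]}')
print(reply.data)                    # empty: the data has not arrived yet

done = threading.Event()
result = []
reply.on_finished(lambda d: (result.append(d), done.set()))
done.wait()
print(result[0])                     # the JSON body, available in the callback
```

    The Qt equivalent of on_finished is connect(reply, &QNetworkReply::finished, ...), with the readAll() moved into the connected slot.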
