Run inference on a model through the API by sending an InvokeModel or InvokeModelWithResponseStream request. To check whether a model supports streaming, send a GetFoundationModel or ListFoundationModels request and check the value of the responseStreamingSupported field in the response.
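The following is a minimal sketch of this flow using Python and boto3. It assumes an Anthropic Claude text-completion model ID and that model's request/response body shape, both of which are illustrative; the body format is model-specific, so substitute the fields your chosen model expects.

```python
import json
import boto3

# Control-plane client for model metadata; runtime client for inference.
bedrock = boto3.client("bedrock")
runtime = boto3.client("bedrock-runtime")

model_id = "anthropic.claude-v2"  # example model ID; substitute your own

# Check whether the model supports streaming responses.
details = bedrock.get_foundation_model(modelIdentifier=model_id)["modelDetails"]
supports_streaming = details.get("responseStreamingSupported", False)

# Request body is model-specific; this shape is for Anthropic Claude text models.
body = json.dumps({
    "prompt": "\n\nHuman: Explain streaming inference briefly.\n\nAssistant:",
    "max_tokens_to_sample": 200,
})

if supports_streaming:
    # Stream the response chunk by chunk as it is generated.
    response = runtime.invoke_model_with_response_stream(modelId=model_id, body=body)
    for event in response["body"]:
        chunk = json.loads(event["chunk"]["bytes"])
        print(chunk.get("completion", ""), end="", flush=True)
else:
    # Wait for the full response in a single payload.
    response = runtime.invoke_model(modelId=model_id, body=body)
    print(json.loads(response["body"].read())["completion"])
```

Alternatively, ListFoundationModels returns a modelSummaries list in which each entry carries the same responseStreamingSupported flag, which is convenient when checking many models at once.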