Use the API to invoke a model with a single prompt - Amazon Bedrock

Run inference on a model through the API by sending an InvokeModel or InvokeModelWithResponseStream request. You can specify the media type for the request and response bodies in the contentType and accept fields. The default value for both fields is application/json, so you only need to set them if you use a different media type.
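The following is a minimal sketch of both request types using boto3, the AWS SDK for Python. The region, model ID, and request body shown here are illustrative assumptions: the body follows Anthropic's Claude messages format on Bedrock, and you would adjust the model ID and body fields for whichever model provider you actually call.

```python
import json

import boto3

# The Bedrock runtime client exposes InvokeModel and InvokeModelWithResponseStream.
client = boto3.client("bedrock-runtime", region_name="us-east-1")  # region is an assumption

model_id = "anthropic.claude-3-haiku-20240307-v1:0"  # example model ID; replace with yours
request_body = {
    # Claude-on-Bedrock messages format; other providers expect different fields.
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Explain Amazon Bedrock in one sentence."}],
}

# Single, non-streaming invocation. contentType and accept default to
# application/json, so they are shown here only for clarity.
response = client.invoke_model(
    modelId=model_id,
    body=json.dumps(request_body),
    contentType="application/json",
    accept="application/json",
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])

# Streaming invocation: the response body is an event stream of JSON chunks.
stream_response = client.invoke_model_with_response_stream(
    modelId=model_id,
    body=json.dumps(request_body),
)
for event in stream_response["body"]:
    chunk = json.loads(event["chunk"]["bytes"])
    # For Claude models, text arrives in content_block_delta events.
    if chunk.get("type") == "content_block_delta":
        print(chunk["delta"].get("text", ""), end="")
```

The non-streaming call returns the full response body at once, while the streaming variant lets you display partial output as chunks arrive, which is useful for long generations.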