Ask AI

Use the Ask AI activity to ask AI to generate a response to your questions, based on system instructions (including role, context, and constraints) and the user prompt. You can optionally recommend workflows to run, and use the AI Agent activity to execute the workflows directly.

Usage

Complete the following properties to use this activity:

  • Request - Specify the following information or click the Variable Reference icon to choose a variable:

    • Model - Choose the OpenAI model that processes the request. The list of models is updated as new models become available. The default model is GPT-5.

    • System instructions - Enter the instructions (such as role, context, and constraints) that the system uses to process requests. These instructions shape the tone, focus, and constraints for all responses in this workflow. For example, act as a cybersecurity analyst.

    • User prompt - Enter the query or request you want the AI to process or respond to during the interaction. You can reference inputs or incident data to make the requests more dynamic. For example, paste in the incident details and prompt the AI to summarize the incident.

  • Workflow Recommendations - Specify the following information:

    • Workflow categories to include - Select the workflow categories to use during the execution of the workflow. If you do not select any, no workflow categories are included. The AI agent waits for the workflows in the selected categories to complete; if they do not finish within the defined timeout period, the agent times out. Only Production Ready workflows in the selected categories are included.

    • Specific workflows to include - Select specific workflows to use during the execution of the workflow, if applicable. If you do not select any, no workflows are included. The AI agent waits for the selected workflows to complete; if they do not finish within the defined timeout period, the agent times out. Only Production Ready workflows are included.

  • Advanced Configuration - Specify the following information or click the Variable Reference icon to choose a variable:

    • Temperature (chat response variability) - Enter a value between 0 and 2 that controls creativity versus consistency in the AI answers. Lower values produce more deterministic, focused, and consistent output; higher values increase the variability and creativity of the output. We recommend a lower value for factual tasks. The default temperature is 1.

      • 0 - Produces highly deterministic and focused responses with minimal randomness. Best for scenarios where accuracy and predictability are critical, such as fact-based outputs, summaries, or any context requiring strict adherence to the prompt.

      • 1 - Balances creativity and focus. Outputs are still grounded but allow for some variety and flexibility. Suitable for general-purpose conversations, moderate creativity, and informative responses.

      • 2 - Encourages more creative and diverse outputs, adding randomness while still maintaining coherence. Suitable for brainstorming ideas, generating varied responses, or exploring multiple possibilities.

    • Max output tokens - Enter the maximum number of tokens the generative AI agent can use, to restrict the length of the AI responses. In a typical OpenAI model, one token is roughly equivalent to four characters. By default, the maximum output is set to 1000 tokens.

    • Previous response ID - Enter the identifier of a response from an earlier activity to continue the conversation with the AI. This augments the context that the AI uses to improve its responses.

    • Tool choice - Choose how the AI model selects and uses the external functions or APIs.

      • Auto - The model autonomously decides whether to call a tool or respond with a natural language message.

      • None - The model is prevented from calling any tools and generates only a text response.

      • Required - The model must call one or more of the provided tools.

    • Execute in the background - Click the toggle to enable background mode, which returns responses immediately and permanently deletes the stored responses. By default, the toggle is off and responses are retained briefly in synchronous mode.

    • Reasoning - This is only supported if the selected AI model is GPT-5, o1, o3, o3-mini, or o4-mini.

      • Effort - Choose the reasoning effort the AI agent uses for a response (High, Medium, Low, or Minimal). Lower reasoning effort produces faster responses and uses fewer tokens.

      • Summary - Choose the level of detail in the reasoning summary generated by the AI agent (Auto, Concise, or Detailed). This helps you understand the reasoning process and is useful for debugging.
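
The properties above map naturally onto an OpenAI-style request. The sketch below shows how such a request payload might be assembled, using the documented defaults (GPT-5 model, temperature 1, 1000 output tokens). The field names follow the OpenAI Responses API; the helper function and the exact mapping the activity uses internally are assumptions for illustration, not the activity's actual implementation.

```python
# Sketch: assembling an Ask AI request payload from the activity's
# properties. Field names follow the OpenAI Responses API conventions;
# the helper itself is hypothetical.

def build_ask_ai_request(
    model="gpt-5",                 # default model per the activity docs
    system_instructions="Act as a cybersecurity analyst.",
    user_prompt="Summarize the incident details below.",
    temperature=1.0,               # 0-2; lower = more deterministic
    max_output_tokens=1000,        # default output limit
    previous_response_id=None,     # continue an earlier conversation
    tool_choice="auto",            # "auto", "none", or "required"
    reasoning_effort=None,         # "minimal", "low", "medium", "high"
):
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")
    payload = {
        "model": model,
        "instructions": system_instructions,  # system instructions
        "input": user_prompt,                 # user prompt
        "temperature": temperature,
        "max_output_tokens": max_output_tokens,
        "tool_choice": tool_choice,
    }
    # Optional fields are included only when set, mirroring the
    # optional Advanced Configuration properties.
    if previous_response_id:
        payload["previous_response_id"] = previous_response_id
    if reasoning_effort:
        payload["reasoning"] = {"effort": reasoning_effort}
    return payload

# A factual-task request: low temperature, low reasoning effort.
request = build_ask_ai_request(temperature=0, reasoning_effort="low")
```

For factual tasks such as incident summaries, a temperature of 0 with a low reasoning effort keeps the response deterministic and fast, as recommended above.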