More sessions

2026-02-24 18:03:22 +00:00
parent eb5bd4a929
commit 23cc78aa98
284 changed files with 129543 additions and 4 deletions


@@ -0,0 +1 @@
/Users/bob/git/dxw/dalmatian

File diff suppressed because one or more lines are too long


@@ -0,0 +1,46 @@
{
"sessionId": "b1be52ea-a27e-458c-8e0d-0080f8fe0427",
"projectHash": "cc0b5b68454a59206e0457f5ad30cf64769ce03f6eaf19fb8c5b1adb80414a8d",
"startTime": "2026-02-10T17:03:35.491Z",
"lastUpdated": "2026-02-10T17:03:42.661Z",
"messages": [
{
"id": "e161f72c-c4c8-492c-bb15-7656bfe1fe25",
"timestamp": "2026-02-10T17:03:35.491Z",
"type": "user",
"content": "Your task is to answer the following question about Gemini CLI:\n<question>\nI have completed the task. How do I signal completion?\n</question>"
},
{
"id": "a23660a6-d13c-4826-ae90-f754334e8199",
"timestamp": "2026-02-10T17:03:42.661Z",
"type": "gemini",
"content": "To signal completion, you need to call the `complete_task` tool. This tool accepts a `report` argument, which is a JSON object. The `report` object must contain two keys:\n\n1. `answer`: A string containing the detailed answer to the user's question about Gemini CLI.\n2. `sources`: An array of strings listing the specific documentation files you used to formulate your answer.\n\nHere's an example of how you would call it:\n\n```python\ncomplete_task(\n report={\n \"answer\": \"This is your detailed answer to the question.\",\n \"sources\": [\"cli/commands.md\", \"cli/configuration.md\"]\n }\n)\n```\nThere are no internal documentation files directly related to how *I* (the agent) signal completion, as this is part of my operational instructions.",
"thoughts": [
{
"subject": "Clarifying Completion Signals",
"description": "I'm focusing on the user's need to submit a final answer. Currently, I'm solidifying the explanation of the `complete_task` tool. I'm structuring the `report` argument, highlighting the `answer` and `sources` components. I'll make sure the user understands how they're intended to work.",
"timestamp": "2026-02-10T17:03:39.590Z"
},
{
"subject": "Outlining Tool Usage",
"description": "I'm now formulating the answer for the user. I've concluded that the `complete_task` tool is the key. I won't need to consult internal docs; the instructions define my completion mechanism. My explanation will focus on the `complete_task` tool and its mandatory parameters. I'm focusing on the `answer` and `sources` fields within the `report` argument.",
"timestamp": "2026-02-10T17:03:41.190Z"
},
{
"subject": "Confirming Completion Strategy",
"description": "I'm now zeroing in on the user's explicit question and how I can best respond. My instructions are the primary resource here, explicitly defining the `complete_task` tool. The structure of the answer will center around the usage of this tool and the required `report` argument, including `answer` and `sources` fields. I am ready to generate the user's answer.",
"timestamp": "2026-02-10T17:03:42.395Z"
}
],
"tokens": {
"input": 3241,
"output": 334,
"cached": 0,
"thoughts": 414,
"tool": 0,
"total": 3989
},
"model": "gemini-2.5-flash"
}
]
}


@@ -0,0 +1,98 @@
{
"sessionId": "b1be52ea-a27e-458c-8e0d-0080f8fe0427",
"projectHash": "cc0b5b68454a59206e0457f5ad30cf64769ce03f6eaf19fb8c5b1adb80414a8d",
"startTime": "2026-02-10T17:21:19.413Z",
"lastUpdated": "2026-02-10T17:33:07.836Z",
"messages": [
{
"id": "dc79b325-ae6d-4e37-b173-04c1caabb63e",
"timestamp": "2026-02-10T17:21:19.413Z",
"type": "user",
"content": "Your task is to answer the following question about Gemini CLI:\n<question>\nI have completed the task. How do I signal completion?\n</question>"
},
{
"id": "6a437301-97ce-4ae0-aba1-2d6e51be4370",
"timestamp": "2026-02-10T17:21:55.294Z",
"type": "gemini",
"content": "The Gemini CLI does not have a specific command for a user to \"signal completion\" of an arbitrary task.\n\nIn the context of the Gemini CLI, task completion is typically handled through:\n\n1. **Natural Language Interaction:** If the \"task\" is an instruction provided by the AI, you would typically respond in natural language within the chat interface, indicating that you have completed the requested action (e.g., \"I'm done with that,\" or \"File has been updated.\"). The AI model then processes this conversational input to understand the task's status.\n2. **Implicit Completion:** If the task involves executing a command, running a script, or modifying files, the completion is generally implicit. For example, successfully running a shell command via `!command` or applying changes via a tool would signify completion of that particular sub-task.\n3. **Session Management:** For managing conversation states, the `/chat save <tag>` command allows you to checkpoint a specific point in your conversation history. While not a generic \"task completion\" signal, it can mark the \"completion\" of a conversational phase, allowing you to return to it later with `/chat resume <tag>`.\n\nThere is no dedicated `/complete` or similar command to formally mark an arbitrary task as finished within the CLI's command structure.\n\n**Sources:**\n* cli/commands.md\n* cli/index.md\n* faq.md",
"thoughts": [
{
"subject": "Contemplating Completion Signaling",
"description": "I've been giving some thought to the user's implicit question: how do *I* signal task completion? It seems the focus is less on Gemini's mechanics and more about my operational procedure. The instruction to use the `complete_task` tool is clear. Now I'm considering the structured output.",
"timestamp": "2026-02-10T17:21:21.934Z"
},
{
"subject": "Defining the User's Intent",
"description": "I'm now zeroing in on the user's implicit question. It appears to be less about *my* completion signaling, and more about how the user interacts with the CLI. This shift in perspective is crucial, as the user likely means how *they* signal completion of *their* tasks within the Gemini CLI. I'm going to search the documentation for relevant CLI commands or configurations.",
"timestamp": "2026-02-10T17:21:23.971Z"
},
{
"subject": "Investigating Completion Methods",
"description": "I am now focusing on the user's implicit intention. Recognizing the role as a CLI Help Agent, I'm interpreting their question as pertaining to features within the Gemini CLI. The recent analysis suggests searching the documentation for relevant CLI commands like \"complete task\", \"signal completion\", and other related phrases. My next step will be to list the available documentation files to see if anything pertinent exists.",
"timestamp": "2026-02-10T17:21:24.965Z"
},
{
"subject": "Examining Completion Signals",
"description": "I'm currently delving into the Gemini CLI documentation, focusing on files such as `cli/commands.md`. My search terms include \"complete,\" \"finish,\" and \"completion\" to pinpoint potential task completion signals. I'm hoping to uncover clues about how the CLI informs the user when a process wraps up.",
"timestamp": "2026-02-10T17:21:28.740Z"
},
{
"subject": "Scouting for Clues",
"description": "I've begun searching the suggested files, starting with `cli/commands.md`. Initial scans for \"complete,\" \"finish,\" and similar terms yielded few promising leads. I'll broaden the search to include less direct terms like \"task\" and \"status.\" The current file is not giving me much to go on, so I'll need to explore others.",
"timestamp": "2026-02-10T17:21:29.577Z"
},
{
"subject": "Analyzing Command Functionality",
"description": "I've been scrutinizing the `cli/commands.md` file, focusing on commands related to task completion and session management. I've pinpointed commands linked to chatting, restoring files, session resumption, and skill manipulation. The \"/quit\" command also caught my attention. I'm now comparing these commands against the keywords I identified to grasp their core functions.",
"timestamp": "2026-02-10T17:21:38.453Z"
},
{
"subject": "Pinpointing Task Completion Signals",
"description": "My review of `cli/commands.md` has progressed. I've been seeking direct task completion signals, focusing on keywords like \"complete\" and \"finish.\" While commands like `/chat save` and `/restore` touch on related concepts like conversation state and undoing, no dedicated signal exists for a user to explicitly notify the system of a task's finalization. The CLI doesn't appear to offer an explicit completion command for the AI's directives.",
"timestamp": "2026-02-10T17:21:41.142Z"
},
{
"subject": "Assessing Absence of Signals",
"description": "My investigation expanded to `cli/index.md` and `faq.md`. My focus remained on finding a \"signal completion\" command, but the search proved fruitless. The conclusion aligns with my initial assessment. The Gemini CLI doesn't appear to offer a direct method for users to explicitly communicate task completion within the interface. Instead, the interaction relies on natural language and implied context. Now, I'm ready to craft an accurate response.",
"timestamp": "2026-02-10T17:21:43.508Z"
},
{
"subject": "Analyzing CLI Capabilities",
"description": "I've been poring over the `cli/index.md` file, focusing on how a user might signal task completion. My initial assessment suggests a gap: the current documentation highlights features but misses any explicit mechanisms for a user to directly communicate task completion to the CLI itself. This seems to be a key area to explore further.",
"timestamp": "2026-02-10T17:21:47.844Z"
},
{
"subject": "Scrutinizing Documentation Gaps",
"description": "I've examined the `faq.md` file, hoping to find alternative completion signaling methods. This was unsuccessful. The CLI implicitly manages tasks via conversation, saving history, and checkpointing, but lacks an explicit \"task complete\" command. My current conclusion: the CLI design favors an implicit task management model, where task completion is inferred from user actions and conversational context.",
"timestamp": "2026-02-10T17:21:49.483Z"
},
{
"subject": "Analyzing Documentation Scope",
"description": "I've just finished reviewing the provided documentation. My focus has been on pinpointing the document's boundaries. I've noted a glaring omission: It doesn't address the user's role in signaling task completion. I'm now cross-referencing this gap with other available resources.",
"timestamp": "2026-02-10T17:21:52.411Z"
},
{
"subject": "Synthesizing Command Absence",
"description": "I've examined the `faq.md`, `cli/commands.md`, and `cli/index.md` files. Confirmation: no command exists for a user to explicitly signal task completion. My next step will be to formulate a response explaining this absence and detailing the implicit ways Gemini CLI recognizes task completion, as I understand it.",
"timestamp": "2026-02-10T17:21:54.411Z"
}
],
"tokens": {
"input": 10449,
"output": 298,
"cached": 0,
"thoughts": 267,
"tool": 0,
"total": 11014
},
"model": "gemini-2.5-flash"
},
{
"id": "01881d72-8297-4e3a-9081-6fdea3188f65",
"timestamp": "2026-02-10T17:21:55.298Z",
"type": "user",
"content": "You have stopped calling tools without finishing. You have one final chance to complete the task with a short grace period. You MUST call `complete_task` immediately with your best answer and explain that your investigation was interrupted. Do not call any other tools."
}
],
"summary": "Get the CLI to signal task completion."
}


@@ -0,0 +1,195 @@
{
"sessionId": "f0dab84e-2dc4-4528-a5a0-f1efc6849e73",
"projectHash": "cc0b5b68454a59206e0457f5ad30cf64769ce03f6eaf19fb8c5b1adb80414a8d",
"startTime": "2026-02-10T17:33:24.433Z",
"lastUpdated": "2026-02-20T15:14:08.759Z",
"messages": [
{
"id": "e64efd39-b7a6-4250-8011-7c1e1769583d",
"timestamp": "2026-02-10T17:33:24.433Z",
"type": "user",
"content": "write a GEMINI.md for this repo"
},
{
"id": "2887fc88-db69-4252-b25a-f3ad6a4b6af4",
"timestamp": "2026-02-10T17:33:41.323Z",
"type": "gemini",
"content": "I will now create a `GEMINI.md` file in the root of the repository, tailored to the specific tools and workflows of the Dalmatian project.\n\n```markdown\n# Dalmatian Project Guidelines (GEMINI.md)\n\n## Core Guidelines\n\n- **Persona:** Assume the user is a 30-year veteran system administrator. Skip explanations of basic concepts. Be direct, technical, and concise.\n- **Direct Action:** Edit files directly. Do not commit or branch autonomously; always ask for permission first.\n- **Code Comments:** Use them sparingly. Only explain complex \"why\" logic. Never explain \"what\" the code is doing.\n\n## Project Discovery\n\n- **Build/Setup:** `bundle exec rake setup:all`\n- **Testing:** \n - Primary: `scripts/bin/test` (checks formatting, linters, and validates tfvars).\n - SQL Munge: `scripts/bin/test-sql-munge` (PHP linter and tests).\n- **Language Stacks:**\n - **Ruby:** Primary language for infrastructure logic and CLI tools (`Gemfile`).\n - **Terraform:** Infrastructure as code (`terraform/`, `.terraform-version`).\n - **Python:** Used for some utilities (`requirements.txt`).\n - **PHP:** Used for SQL munging tools (`terraform/infrastructures/ecs/infrastructure_support/php-sql-munge-docker/`).\n - **Shell:** Various scripts in `scripts/`.\n\n## Standards & Testing\n\n- **Indentation:** Mimic local indentation (tabs vs spaces) and naming conventions exactly.\n- **Formatting:** \n - Terraform: Always run `terraform fmt`.\n - Ruby: Adhere to `.rubocop.yml` and `.standard.yml`.\n- **Testing:** Always write a test if a framework exists. Use RSpec for Ruby (`spec/`).\n- **ADRs:** Follow the Architecture Design Records process (`.adr-dir`, `docs/architecture-decisions/`). 
Use `adr-tools`.\n\n## Language Defaults\n\n- **Ruby:** Use `bundler`.\n- **Python:** Use `virtualenv`.\n- **PHP:** Adhere to `dxw/php-cs-fixer-config` standards as checked by `test-sql-munge`.\n\n## Git Workflow\n\n- **Branching:** \n - Feature branches are **required**.\n - Branch from `master` and keep branches short-lived.\n - Naming: `description` or `ticket-description` (e.g., `123-add-rds-cluster`).\n- **Commits:**\n - Use present tense, imperative mood (e.g., `add rds cluster`).\n - Summary <= 50 chars; wrap body at 72 chars.\n - Explain _why_ in the body.\n - Reference issue/ticket numbers if available.\n- **Safety:** Always run `git diff` before asking to commit. Force push is only permitted on private feature branches with `--force-with-lease`.\n\n## Git Worktrees\n\n- **Suitability:** This repo is suitable for `git worktree` for parallel tasks or frequent context switching.\n```\n\nI will write this to `GEMINI.md`.\n\nExplanation of the command: This command will create a new file named `GEMINI.md` in the root directory of the project, which will contain project-specific guidelines for me to follow during our interaction.",
"thoughts": [
{
"subject": "Examining Project Context",
"description": "I'm currently reviewing the project's architecture to grasp its nuances. Understanding the existing file structure is crucial before I can draft the new `GEMINI.md` file. I'll need to align its contents with the system's guidelines and the contextual information provided.",
"timestamp": "2026-02-10T17:33:27.296Z"
},
{
"subject": "Investigating File Structures",
"description": "I'm now examining the `dal matian` repo's architecture. The presence of `Gemfile`, `Rakefile`, and various Ruby files, alongside a `requirements.txt`, suggests a Ruby-based project with potential Python integration. I've also noted the `scripts/` directory, the `terraform/` directory, and a `.terraform-version` file. I'm focusing on the `README.md` to see about build and test instructions.",
"timestamp": "2026-02-10T17:33:29.613Z"
},
{
"subject": "Formulating `GEMINI.md` Content",
"description": "I'm now drafting the `GEMINI.md` file, using the global instructions and project specifics. The focus is on providing tailored guidelines. I've compiled testing, build, and linting commands from the `README.md` and related files. I'm also incorporating specific branching and contribution details from the `CONTRIBUTING.md` file to provide a focused, project-relevant `GEMINI.md`.",
"timestamp": "2026-02-10T17:33:31.673Z"
},
{
"subject": "Constructing Project Guidelines",
"description": "Now I'm crafting the project-specific `GEMINI.md`. I've distilled setup, testing, and linting commands from `README.md` and related files. I'm focusing on Ruby, Python, and PHP, noting the presence of each and their respective linting tools. The branching strategy is also being refined, focusing on feature branches against the `master` branch. I'm taking all this into consideration while adapting from the existing `../../../.gemini/GEMINI.md`.",
"timestamp": "2026-02-10T17:33:34.952Z"
},
{
"subject": "Synthesizing `GEMINI.md` Content",
"description": "Now I'm focusing on the specifics of this `dal matian` project while constructing `GEMINI.md`. I've distilled setup and testing commands and identified Ruby, Python, and PHP as the relevant language stacks. Key linting tools are noted, as is the feature-branch-against-master branching strategy. I'm leveraging the global `GEMINI.md` as the core template for this project-specific file.",
"timestamp": "2026-02-10T17:33:35.010Z"
}
],
"tokens": {
"input": 18705,
"output": 1411,
"cached": 0,
"thoughts": 1054,
"tool": 0,
"total": 21170
},
"model": "gemini-3-flash-preview",
"toolCalls": [
{
"id": "write_file-1770744821320-7af911be1b259",
"name": "write_file",
"args": {
"content": "# Dalmatian Project Guidelines (GEMINI.md)\n\n## Core Guidelines\n\n- **Persona:** Assume the user is a 30-year veteran system administrator. Skip explanations of basic concepts. Be direct, technical, and concise.\n- **Direct Action:** Edit files directly. Do not commit or branch autonomously; always ask for permission first.\n- **Code Comments:** Use them sparingly. Only explain complex \"why\" logic. Never explain \"what\" the code is doing.\n\n## Project Discovery\n\n- **Build/Setup:** `bundle exec rake setup:all`\n- **Testing:** \n - Primary: `scripts/bin/test` (checks formatting, linters, and validates tfvars).\n - SQL Munge: `scripts/bin/test-sql-munge` (PHP linter and tests).\n- **Language Stacks:**\n - **Ruby:** Primary language for infrastructure logic and CLI tools (`Gemfile`).\n - **Terraform:** Infrastructure as code (`terraform/`, `.terraform-version`).\n - **Python:** Used for some utilities (`requirements.txt`).\n - **PHP:** Used for SQL munging tools (`terraform/infrastructures/ecs/infrastructure_support/php-sql-munge-docker/`).\n - **Shell:** Various scripts in `scripts/`.\n\n## Standards & Testing\n\n- **Indentation:** Mimic local indentation (tabs vs spaces) and naming conventions exactly.\n- **Formatting:** \n - Terraform: Always run `terraform fmt`.\n - Ruby: Adhere to `.rubocop.yml` and `.standard.yml`.\n- **Testing:** Always write a test if a framework exists. Use RSpec for Ruby (`spec/`).\n- **ADRs:** Follow the Architecture Design Records process (`.adr-dir`, `docs/architecture-decisions/`). 
Use `adr-tools`.\n\n## Language Defaults\n\n- **Ruby:** Use `bundler`.\n- **Python:** Use `virtualenv`.\n- **PHP:** Adhere to `dxw/php-cs-fixer-config` standards as checked by `test-sql-munge`.\n\n## Git Workflow\n\n- **Branching:** \n - Feature branches are **required**.\n - Branch from `master` and keep branches short-lived.\n - Naming: `description` or `ticket-description` (e.g., `123-add-rds-cluster`).\n- **Commits:**\n - Use present tense, imperative mood (e.g., `add rds cluster`).\n - Summary <= 50 chars; wrap body at 72 chars.\n - Explain _why_ in the body.\n - Reference issue/ticket numbers if available.\n- **Safety:** Always run `git diff` before asking to commit. Force push is only permitted on private feature branches with `--force-with-lease`.\n\n## Git Worktrees\n\n- **Suitability:** This repo is suitable for `git worktree` for parallel tasks or frequent context switching.\n",
"file_path": "GEMINI.md"
},
"result": [
{
"functionResponse": {
"id": "write_file-1770744821320-7af911be1b259",
"name": "write_file",
"response": {
"error": "[Operation Cancelled] Reason: User denied execution."
}
}
}
],
"status": "cancelled",
"timestamp": "2026-02-10T17:35:27.782Z",
"resultDisplay": {
"fileDiff": "Index: GEMINI.md\n===================================================================\n--- GEMINI.md\tCurrent\n+++ GEMINI.md\tProposed\n@@ -0,0 +1,52 @@\n+# Dalmatian Project Guidelines (GEMINI.md)\n+\n+## Core Guidelines\n+\n+- **Persona:** Assume the user is a 30-year veteran system administrator. Skip explanations of basic concepts. Be direct, technical, and concise.\n+- **Direct Action:** Edit files directly. Do not commit or branch autonomously; always ask for permission first.\n+- **Code Comments:** Use them sparingly. Only explain complex \"why\" logic. Never explain \"what\" the code is doing.\n+\n+## Project Discovery\n+\n+- **Build/Setup:** `bundle exec rake setup:all`\n+- **Testing:** \n+ - Primary: `scripts/bin/test` (checks formatting, linters, and validates tfvars).\n+ - SQL Munge: `scripts/bin/test-sql-munge` (PHP linter and tests).\n+- **Language Stacks:**\n+ - **Ruby:** Primary language for infrastructure logic and CLI tools (`Gemfile`).\n+ - **Terraform:** Infrastructure as code (`terraform/`, `.terraform-version`).\n+ - **Python:** Used for some utilities (`requirements.txt`).\n+ - **PHP:** Used for SQL munging tools (`terraform/infrastructures/ecs/infrastructure_support/php-sql-munge-docker/`).\n+ - **Shell:** Various scripts in `scripts/`.\n+\n+## Standards & Testing\n+\n+- **Indentation:** Mimic local indentation (tabs vs spaces) and naming conventions exactly.\n+- **Formatting:** \n+ - Terraform: Always run `terraform fmt`.\n+ - Ruby: Adhere to `.rubocop.yml` and `.standard.yml`.\n+- **Testing:** Always write a test if a framework exists. Use RSpec for Ruby (`spec/`).\n+- **ADRs:** Follow the Architecture Design Records process (`.adr-dir`, `docs/architecture-decisions/`). 
Use `adr-tools`.\n+\n+## Language Defaults\n+\n+- **Ruby:** Use `bundler`.\n+- **Python:** Use `virtualenv`.\n+- **PHP:** Adhere to `dxw/php-cs-fixer-config` standards as checked by `test-sql-munge`.\n+\n+## Git Workflow\n+\n+- **Branching:** \n+ - Feature branches are **required**.\n+ - Branch from `master` and keep branches short-lived.\n+ - Naming: `description` or `ticket-description` (e.g., `123-add-rds-cluster`).\n+- **Commits:**\n+ - Use present tense, imperative mood (e.g., `add rds cluster`).\n+ - Summary <= 50 chars; wrap body at 72 chars.\n+ - Explain _why_ in the body.\n+ - Reference issue/ticket numbers if available.\n+- **Safety:** Always run `git diff` before asking to commit. Force push is only permitted on private feature branches with `--force-with-lease`.\n+\n+## Git Worktrees\n+\n+- **Suitability:** This repo is suitable for `git worktree` for parallel tasks or frequent context switching.\n",
"fileName": "GEMINI.md",
"filePath": "/Users/bob/git/dxw/dalmatian/GEMINI.md",
"originalContent": "",
"newContent": "# Dalmatian Project Guidelines (GEMINI.md)\n\n## Core Guidelines\n\n- **Persona:** Assume the user is a 30-year veteran system administrator. Skip explanations of basic concepts. Be direct, technical, and concise.\n- **Direct Action:** Edit files directly. Do not commit or branch autonomously; always ask for permission first.\n- **Code Comments:** Use them sparingly. Only explain complex \"why\" logic. Never explain \"what\" the code is doing.\n\n## Project Discovery\n\n- **Build/Setup:** `bundle exec rake setup:all`\n- **Testing:** \n - Primary: `scripts/bin/test` (checks formatting, linters, and validates tfvars).\n - SQL Munge: `scripts/bin/test-sql-munge` (PHP linter and tests).\n- **Language Stacks:**\n - **Ruby:** Primary language for infrastructure logic and CLI tools (`Gemfile`).\n - **Terraform:** Infrastructure as code (`terraform/`, `.terraform-version`).\n - **Python:** Used for some utilities (`requirements.txt`).\n - **PHP:** Used for SQL munging tools (`terraform/infrastructures/ecs/infrastructure_support/php-sql-munge-docker/`).\n - **Shell:** Various scripts in `scripts/`.\n\n## Standards & Testing\n\n- **Indentation:** Mimic local indentation (tabs vs spaces) and naming conventions exactly.\n- **Formatting:** \n - Terraform: Always run `terraform fmt`.\n - Ruby: Adhere to `.rubocop.yml` and `.standard.yml`.\n- **Testing:** Always write a test if a framework exists. Use RSpec for Ruby (`spec/`).\n- **ADRs:** Follow the Architecture Design Records process (`.adr-dir`, `docs/architecture-decisions/`). 
Use `adr-tools`.\n\n## Language Defaults\n\n- **Ruby:** Use `bundler`.\n- **Python:** Use `virtualenv`.\n- **PHP:** Adhere to `dxw/php-cs-fixer-config` standards as checked by `test-sql-munge`.\n\n## Git Workflow\n\n- **Branching:** \n - Feature branches are **required**.\n - Branch from `master` and keep branches short-lived.\n - Naming: `description` or `ticket-description` (e.g., `123-add-rds-cluster`).\n- **Commits:**\n - Use present tense, imperative mood (e.g., `add rds cluster`).\n - Summary <= 50 chars; wrap body at 72 chars.\n - Explain _why_ in the body.\n - Reference issue/ticket numbers if available.\n- **Safety:** Always run `git diff` before asking to commit. Force push is only permitted on private feature branches with `--force-with-lease`.\n\n## Git Worktrees\n\n- **Suitability:** This repo is suitable for `git worktree` for parallel tasks or frequent context switching.\n"
},
"displayName": "WriteFile",
"description": "Writes content to a specified file in the local filesystem.\n\n The user has the ability to modify `content`. If modified, this will be stated in the response.",
"renderOutputAsMarkdown": true
}
]
},
{
"id": "9676f3f0-00f5-4946-8419-dfaf06fc955e",
"timestamp": "2026-02-10T17:35:27.783Z",
"type": "info",
"content": "Request cancelled."
},
{
"id": "54aa7302-9bf3-45d2-b170-5f38d5739716",
"timestamp": "2026-02-10T17:35:52.818Z",
"type": "user",
"content": "dont add my global context only things relevant to this repo"
},
{
"id": "24330c74-04a0-4a7f-ae3d-7ad3c3a066da",
"timestamp": "2026-02-10T17:36:00.056Z",
"type": "gemini",
"content": "I will now create a `GEMINI.md` file containing only the repository-specific guidelines for Dalmatian.\n\n```markdown\n# Dalmatian Project Guidelines (GEMINI.md)\n\n## Project Discovery\n\n- **Build/Setup:** `bundle exec rake setup:all`\n- **Testing:** \n - Infrastructure: `scripts/bin/test` (checks formatting, linters, and validates tfvars).\n - SQL Munge: `scripts/bin/test-sql-munge` (PHP linter and tests).\n - Ruby: `bundle exec rspec` (Standard RSpec suite).\n- **Language Stacks:**\n - **Ruby:** Infrastructure logic and CLI tools (`Gemfile`).\n - **Terraform:** Infrastructure as code (`terraform/`).\n - **Python:** Utilities (`requirements.txt`).\n - **PHP:** SQL munging tools (`terraform/infrastructures/ecs/infrastructure_support/php-sql-munge-docker/`).\n\n## Standards & Testing\n\n- **Architecture Decisions:** Follow the ADR process. Records are stored in `docs/architecture-decisions/`.\n- **Formatting:** \n - Terraform: Always run `terraform fmt`.\n - PHP: Adhere to `dxw/php-cs-fixer-config` (checked by `test-sql-munge`).\n- **Ruby Conventions:** Adhere to `.rubocop.yml` and `.standard.yml`.\n\n## Infrastructure Configuration\n\n- Configurations are defined in `dalmatian.yml` (see `dalmatian.yml.example`).\n- Supports `standalone` and `overloaded` (remote sources) cluster descriptions.\n- Remote sources are cached locally via `bundle exec rake dalmatian:fetch`.\n```\n\nExplanation: This command creates a `GEMINI.md` file in the root directory, populated with specific discovery, testing, and standard information derived from the Dalmatian repository.",
"thoughts": [
{
"subject": "Analyzing Local Scope",
"description": "\\n\\n\n\nI'm currently focused on distinguishing the local context from broader, global instructions. My prior attempt missed the mark by including extraneous details, like \"Persona\" and \"Directives.\" The aim is to generate a `GEMINI.md` file purely for this repository, free of any inherited or overarching instructions. I'm working to filter out irrelevant information and isolate the repository-specific elements.",
"timestamp": "2026-02-10T17:35:55.269Z"
},
{
"subject": "Defining Repository Context",
"description": "\\n\\n\n\nMy primary focus is now on refining the content. I've successfully identified key repository-specific details, such as the project's purpose, build instructions, and testing procedures. My next step is to structure this information clearly within the `GEMINI.md` file, omitting any global instructions or standard practices. I'm aiming for concise, actionable content that is directly relevant to this Dalmatian project, not general guidelines.",
"timestamp": "2026-02-10T17:35:56.480Z"
}
],
"tokens": {
"input": 20137,
"output": 776,
"cached": 0,
"thoughts": 258,
"tool": 0,
"total": 21171
},
"model": "gemini-3-flash-preview",
"toolCalls": [
{
"id": "write_file-1770744960054-5bcc75fc53d5e",
"name": "write_file",
"args": {
"content": "# Dalmatian Project Guidelines (GEMINI.md)\n\n## Project Discovery\n\n- **Build/Setup:** `bundle exec rake setup:all`\n- **Testing:** \n - Infrastructure: `scripts/bin/test` (checks formatting, linters, and validates tfvars).\n - SQL Munge: `scripts/bin/test-sql-munge` (PHP linter and tests).\n - Ruby: `bundle exec rspec` (Standard RSpec suite).\n- **Language Stacks:**\n - **Ruby:** Infrastructure logic and CLI tools (`Gemfile`).\n - **Terraform:** Infrastructure as code (`terraform/`).\n - **Python:** Utilities (`requirements.txt`).\n - **PHP:** SQL munging tools (`terraform/infrastructures/ecs/infrastructure_support/php-sql-munge-docker/`).\n\n## Standards & Testing\n\n- **Architecture Decisions:** Follow the ADR process. Records are stored in `docs/architecture-decisions/`.\n- **Formatting:** \n - Terraform: Always run `terraform fmt`.\n - PHP: Adhere to `dxw/php-cs-fixer-config` (checked by `test-sql-munge`).\n- **Ruby Conventions:** Adhere to `.rubocop.yml` and `.standard.yml`.\n\n## Infrastructure Configuration\n\n- Configurations are defined in `dalmatian.yml` (see `dalmatian.yml.example`).\n- Supports `standalone` and `overloaded` (remote sources) cluster descriptions.\n- Remote sources are cached locally via `bundle exec rake dalmatian:fetch`.\n",
"file_path": "GEMINI.md"
},
"result": [
{
"functionResponse": {
"id": "write_file-1770744960054-5bcc75fc53d5e",
"name": "write_file",
"response": {
"output": "Successfully created and wrote to new file: /Users/bob/git/dxw/dalmatian/GEMINI.md."
}
}
}
],
"status": "success",
"timestamp": "2026-02-10T17:37:48.847Z",
"resultDisplay": {
"fileDiff": "Index: GEMINI.md\n===================================================================\n--- GEMINI.md\tOriginal\n+++ GEMINI.md\tWritten\n@@ -0,0 +1,28 @@\n+# Dalmatian Project Guidelines (GEMINI.md)\n+\n+## Project Discovery\n+\n+- **Build/Setup:** `bundle exec rake setup:all`\n+- **Testing:** \n+ - Infrastructure: `scripts/bin/test` (checks formatting, linters, and validates tfvars).\n+ - SQL Munge: `scripts/bin/test-sql-munge` (PHP linter and tests).\n+ - Ruby: `bundle exec rspec` (Standard RSpec suite).\n+- **Language Stacks:**\n+ - **Ruby:** Infrastructure logic and CLI tools (`Gemfile`).\n+ - **Terraform:** Infrastructure as code (`terraform/`).\n+ - **Python:** Utilities (`requirements.txt`).\n+ - **PHP:** SQL munging tools (`terraform/infrastructures/ecs/infrastructure_support/php-sql-munge-docker/`).\n+\n+## Standards & Testing\n+\n+- **Architecture Decisions:** Follow the ADR process. Records are stored in `docs/architecture-decisions/`.\n+- **Formatting:** \n+ - Terraform: Always run `terraform fmt`.\n+ - PHP: Adhere to `dxw/php-cs-fixer-config` (checked by `test-sql-munge`).\n+- **Ruby Conventions:** Adhere to `.rubocop.yml` and `.standard.yml`.\n+\n+## Infrastructure Configuration\n+\n+- Configurations are defined in `dalmatian.yml` (see `dalmatian.yml.example`).\n+- Supports `standalone` and `overloaded` (remote sources) cluster descriptions.\n+- Remote sources are cached locally via `bundle exec rake dalmatian:fetch`.\n",
"fileName": "GEMINI.md",
"filePath": "/Users/bob/git/dxw/dalmatian/GEMINI.md",
"originalContent": "",
"newContent": "# Dalmatian Project Guidelines (GEMINI.md)\n\n## Project Discovery\n\n- **Build/Setup:** `bundle exec rake setup:all`\n- **Testing:** \n - Infrastructure: `scripts/bin/test` (checks formatting, linters, and validates tfvars).\n - SQL Munge: `scripts/bin/test-sql-munge` (PHP linter and tests).\n - Ruby: `bundle exec rspec` (Standard RSpec suite).\n- **Language Stacks:**\n - **Ruby:** Infrastructure logic and CLI tools (`Gemfile`).\n - **Terraform:** Infrastructure as code (`terraform/`).\n - **Python:** Utilities (`requirements.txt`).\n - **PHP:** SQL munging tools (`terraform/infrastructures/ecs/infrastructure_support/php-sql-munge-docker/`).\n\n## Standards & Testing\n\n- **Architecture Decisions:** Follow the ADR process. Records are stored in `docs/architecture-decisions/`.\n- **Formatting:** \n - Terraform: Always run `terraform fmt`.\n - PHP: Adhere to `dxw/php-cs-fixer-config` (checked by `test-sql-munge`).\n- **Ruby Conventions:** Adhere to `.rubocop.yml` and `.standard.yml`.\n\n## Infrastructure Configuration\n\n- Configurations are defined in `dalmatian.yml` (see `dalmatian.yml.example`).\n- Supports `standalone` and `overloaded` (remote sources) cluster descriptions.\n- Remote sources are cached locally via `bundle exec rake dalmatian:fetch`.\n",
"diffStat": {
"model_added_lines": 28,
"model_removed_lines": 0,
"model_added_chars": 1227,
"model_removed_chars": 0,
"user_added_lines": 0,
"user_removed_lines": 0,
"user_added_chars": 0,
"user_removed_chars": 0
},
"isNewFile": true
},
"displayName": "WriteFile",
"description": "Writes content to a specified file in the local filesystem.\n\n The user has the ability to modify `content`. If modified, this will be stated in the response.",
"renderOutputAsMarkdown": true
}
]
},
{
"id": "5ef8e27f-6a4e-46a8-8e92-996df2828e8c",
"timestamp": "2026-02-10T17:37:51.797Z",
"type": "gemini",
"content": "The `GEMINI.md` file has been created with repository-specific discovery, standards, and configuration details.",
"thoughts": [
{
"subject": "Confirming Task Completion",
"description": "I've generated the `GEMINI.md` file, incorporating all repository-specific details. My next step is to determine if the user's requirements have been fully addressed. I'm seeking confirmation to close this task, as the primary objective has been met.",
"timestamp": "2026-02-10T17:37:51.795Z"
}
],
"tokens": {
"input": 20940,
"output": 23,
"cached": 17080,
"thoughts": 64,
"tool": 0,
"total": 21027
},
"model": "gemini-3-flash-preview"
}
],
"summary": "Create a GEMINI.md for the repo."
}


tmp/dalmatian/logs.json Normal file

@@ -0,0 +1,191 @@
[
{
"sessionId": "b1be52ea-a27e-458c-8e0d-0080f8fe0427",
"messageId": 0,
"type": "user",
"message": "add a optional feature for the sql backup s3 buckets to set up replication to a bucket in another AWS account. Configure cross-account replication. Specify destination bucket ARN and IAM role. Implement error handling for replication failures. Add unit tests for replication configuration. \nconsider that we need to deal with encryption of the objects as well.",
"timestamp": "2026-02-10T16:55:47.726Z"
},
{
"sessionId": "b1be52ea-a27e-458c-8e0d-0080f8fe0427",
"messageId": 1,
"type": "user",
"message": "create the commits you want on a new branch.",
"timestamp": "2026-02-10T17:05:08.187Z"
},
{
"sessionId": "b1be52ea-a27e-458c-8e0d-0080f8fe0427",
"messageId": 2,
"type": "user",
"message": "/help",
"timestamp": "2026-02-10T17:06:37.263Z"
},
{
"sessionId": "c48984cb-ad6c-4bac-b618-2b58f65db3b7",
"messageId": 0,
"type": "user",
"message": "/resume",
"timestamp": "2026-02-10T17:16:20.376Z"
},
{
"sessionId": "c48984cb-ad6c-4bac-b618-2b58f65db3b7",
"messageId": 1,
"type": "user",
"message": "Make this work when replicating to buckets in another AWS account. Configure cross-account IAM roles and S3 bucket policies. Ensure KMS key policies permit cross-account access if encryption is used. Validate replication settings for the destination account.",
"timestamp": "2026-02-10T17:17:56.617Z"
},
{
"sessionId": "c48984cb-ad6c-4bac-b618-2b58f65db3b7",
"messageId": 2,
"type": "user",
"message": "write out those docs for the external account. maybe add a comment to the code as well.",
"timestamp": "2026-02-10T17:22:39.560Z"
},
{
"sessionId": "f0dab84e-2dc4-4528-a5a0-f1efc6849e73",
"messageId": 0,
"type": "user",
"message": "write a GEMINI.md for this repo",
"timestamp": "2026-02-10T17:33:22.928Z"
},
{
"sessionId": "f0dab84e-2dc4-4528-a5a0-f1efc6849e73",
"messageId": 1,
"type": "user",
"message": "don't add my global context, only things relevant to this repo",
"timestamp": "2026-02-10T17:35:47.284Z"
},
{
"sessionId": "cebedcb4-6673-4a0e-bf61-a131af573337",
"messageId": 0,
"type": "user",
"message": "/resume",
"timestamp": "2026-02-20T15:14:30.079Z"
},
{
"sessionId": "cebedcb4-6673-4a0e-bf61-a131af573337",
"messageId": 1,
"type": "user",
"message": "add sql backup replication options to the example dalmatian.yml",
"timestamp": "2026-02-20T15:15:05.025Z"
},
{
"sessionId": "cebedcb4-6673-4a0e-bf61-a131af573337",
"messageId": 2,
"type": "user",
"message": "add a feature to @scripts/bin/test so that setup isn't run and we only run the tests that don't call `rake dalmatian` commands",
"timestamp": "2026-02-20T15:47:19.566Z"
},
{
"sessionId": "cebedcb4-6673-4a0e-bf61-a131af573337",
"messageId": 3,
"type": "user",
"message": "add the aurora tests",
"timestamp": "2026-02-20T15:49:35.764Z"
},
{
"sessionId": "cebedcb4-6673-4a0e-bf61-a131af573337",
"messageId": 4,
"type": "user",
"message": "git st",
"timestamp": "2026-02-20T16:07:54.239Z"
},
{
"sessionId": "cebedcb4-6673-4a0e-bf61-a131af573337",
"messageId": 5,
"type": "user",
"message": "ensure that the dalmatian.yml.example is the same for RDS and aurora and then make the aurora tests work.",
"timestamp": "2026-02-20T16:09:02.302Z"
},
{
"sessionId": "cebedcb4-6673-4a0e-bf61-a131af573337",
"messageId": 6,
"type": "user",
"message": "ensure that the dalmatian.yml.example is the same for RDS and aurora\n and then make the aurora tests work.",
"timestamp": "2026-02-20T16:09:16.281Z"
},
{
"sessionId": "cebedcb4-6673-4a0e-bf61-a131af573337",
"messageId": 7,
"type": "user",
"message": "/model",
"timestamp": "2026-02-20T16:18:03.181Z"
},
{
"sessionId": "cebedcb4-6673-4a0e-bf61-a131af573337",
"messageId": 8,
"type": "user",
"message": "try and fix the aurora tests",
"timestamp": "2026-02-20T16:18:25.908Z"
},
{
"sessionId": "cebedcb4-6673-4a0e-bf61-a131af573337",
"messageId": 9,
"type": "user",
"message": "git st",
"timestamp": "2026-02-20T16:35:35.335Z"
},
{
"sessionId": "cebedcb4-6673-4a0e-bf61-a131af573337",
"messageId": 10,
"type": "user",
"message": "remove the aurora tests",
"timestamp": "2026-02-20T16:35:52.923Z"
},
{
"sessionId": "cebedcb4-6673-4a0e-bf61-a131af573337",
"messageId": 11,
"type": "user",
"message": "remove the aurora tests",
"timestamp": "2026-02-20T16:36:05.180Z"
},
{
"sessionId": "b86444e1-3f32-4f39-9c46-1ce1d2631b8b",
"messageId": 0,
"type": "user",
"message": "fix tests due to code changes: Identify failing tests. Analyze failure logs. Implement precise code corrections. Verify the test suite passes. Use ./scripts/bin/test -l to test",
"timestamp": "2026-02-20T17:08:06.183Z"
},
{
"sessionId": "630ce049-b5a9-45d9-a5c3-385de197ea93",
"messageId": 0,
"type": "user",
"message": "can we avoid making the sql backup bucket be versioned unless needed by enabling replication. \n # aws_s3_bucket_versioning.sql_backups will be created\n + resource \"aws_s3_bucket_versioning\" \"sql_backups\" {\n + bucket = \"dxw-pentest-dxwpentestpentestvonestaging-sql-backup\"\n + id = (known after apply)\n\n + versioning_configuration {\n + mfa_delete = (known after apply)\n + status = \"Suspended\"\n }\n }",
"timestamp": "2026-02-20T17:41:59.769Z"
},
{
"sessionId": "630ce049-b5a9-45d9-a5c3-385de197ea93",
"messageId": 1,
"type": "user",
"message": "use `./scripts/bin/test -l` for a quick test of code changes. you can also use `./scripts/bin/test -i dxw-pentest -T rds` to see what terraform would plan for a real environment",
"timestamp": "2026-02-20T17:46:09.307Z"
},
{
"sessionId": "630ce049-b5a9-45d9-a5c3-385de197ea93",
"messageId": 2,
"type": "user",
"message": "edit dalmatian.yml to replicate the dxw-pentest sql backup to wai4vub1-mtvh-replication-test in aws account 893439696758",
"timestamp": "2026-02-20T17:53:04.101Z"
},
{
"sessionId": "630ce049-b5a9-45d9-a5c3-385de197ea93",
"messageId": 3,
"type": "user",
"message": "/directory add ../dalmatian-config",
"timestamp": "2026-02-20T17:54:32.537Z"
},
{
"sessionId": "630ce049-b5a9-45d9-a5c3-385de197ea93",
"messageId": 4,
"type": "user",
"message": "edit dalmatian.yml to replicate the dxw-pentest sql backup to wai4vub1-mtvh-replication-test in aws account 893439696758",
"timestamp": "2026-02-20T17:55:00.146Z"
},
{
"sessionId": "630ce049-b5a9-45d9-a5c3-385de197ea93",
"messageId": 5,
"type": "user",
"message": "update @GEMINI.md",
"timestamp": "2026-02-20T18:01:12.986Z"
}
]


@@ -0,0 +1,3 @@
ensure that the dalmatian.yml.example is the same for RDS and aurora and then make the aurora tests work.
git st
remove the aurora tests


@@ -0,0 +1,42 @@
Found 20 matches for pattern "check_sql_backup_scheduled_task_environment_variables" in path ".":
---
File: lib/dalmatian/rds.rb
L101: def check_sql_backup_scheduled_task_environment_variables
L102: reference["check_sql_backup_scheduled_task_environment_variables"] || []
L156: "check_sql_backup_scheduled_task_environment_variables" => check_sql_backup_scheduled_task_environment_variables,
---
File: lib/dalmatian/rds_test.rb
L47: "check_sql_backup_scheduled_task_environment_variables" => rds.check_sql_backup_scheduled_task_environment_variables,
---
File: spec/unit/rds_spec.rb
L33: "check_sql_backup_scheduled_task_environment_variables" => [
L165: describe "#check_sql_backup_scheduled_task_environment_variables" do
L166: it "uses the rds check_sql_backup_scheduled_task_environment_variables" do
L167: expect(rds.check_sql_backup_scheduled_task_environment_variables).to eq([{"name" => "foo", "value" => "bar"}])
L234: "check_sql_backup_scheduled_task_environment_variables" => [
---
File: spec/unit/rds_test_spec.rb
L57: check_sql_backup_scheduled_task_environment_variables: [
L174: "check_sql_backup_scheduled_task_environment_variables" => [
---
File: spec/integration/deploys_local_configuration_spec.rb
L427: "check_sql_backup_scheduled_task_environment_variables" => [
---
File: spec/integration/tests_local_configuration_spec.rb
L400: "check_sql_backup_scheduled_task_environment_variables" => [
L533: "check_sql_backup_scheduled_task_environment_variables" => [
L2021: "check_sql_backup_scheduled_task_environment_variables" => [
L2154: "check_sql_backup_scheduled_task_environment_variables" => [
---
File: dalmatian.yml.example
L144: check_sql_backup_scheduled_task_environment_variables:
---
File: terraform/infrastructures/rds/variables.tf
L64: check_sql_backup_scheduled_task_environment_variables = list(map(string))
---
File: terraform/infrastructures/rds/check_sql_backups_scheduled_task.tf
L70: environment = jsonencode(local.rds_check_sql_backup_scheduled_task_environment_variables)
---
File: terraform/infrastructures/rds/locals.tf
L19: rds_check_sql_backup_scheduled_task_environment_variables = var.rds["check_sql_backup_scheduled_task_environment_variables"]
---


@@ -0,0 +1,7 @@
Found 3 matches for pattern "cluster1_plan_cmd_production_aurora" in path ".":
---
File: spec/integration/tests_local_configuration_spec.rb
L688: let(:cluster1_plan_cmd_production_aurora) do
L3362: allow(Helper).to receive(:run!).with(cluster1_plan_cmd_production_aurora)
L3487: expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_production_aurora)
---


@@ -0,0 +1,130 @@
Found 100 matches for pattern "Aurora" in path "." (results limited to 100 matches for performance):
---
File: lib/dalmatian.rb
L40: require_relative "dalmatian/aurora"
L41: require_relative "dalmatian/aurora_test"
L42: require_relative "dalmatian/aurora_deployment"
---
File: lib/dalmatian/aurora_deployment.rb
L2: class AuroraDeployment
L6: aurora:,
L13: @aurora = aurora
L25: attr_reader :aurora, :env, :helper, :logger, :auto_approve, :plan, :destroy, :verbose
L31: "aurora"
L37: "account_id" => aurora.account_id,
L38: "infrastructure_name" => aurora.cluster_id,
L40: "aurora" => aurora.to_params
L49: [aurora.cluster_id, aurora.identifier, "aurora", env_name].join("-")
---
File: lib/dalmatian/cluster.rb
L18: @auroras = build_auroras(properties["aurora"])
L24: attr_reader :hosted_zones, :s3s, :vpn_customer_gateways, :sources, :services, :wafs, :rdss, :auroras, :elasticache_clusters,
L73: auroras.each do |aurora|
L74: unless skip_deployments.include?("aurora")
L75: deploy_aurora(aurora, environment, test, auto_approve, plan, destroy, verbose)
L167: def build_auroras(aurora_references)
L168: (aurora_references || []).map do |reference|
L169: Aurora.new(cluster: self, reference: reference)
L272: def deploy_aurora(aurora, env, test, auto_approve, plan, destroy, verbose)
L273: test_aurora(aurora, env) if test
L274: AuroraDeployment.new(
L275: aurora: aurora,
L356: def test_aurora(aurora, env)
L357: puts "debug Testing Aurora"
L358: AuroraTest.new(aurora: aurora, env: env).call
---
File: lib/dalmatian/aurora.rb
L2: class Aurora
---
File: lib/dalmatian/aurora_test.rb
L2: class AuroraTest
L6: aurora:,
L12: @aurora = aurora
L21: attr_reader :aurora, :env, :helper, :logger, :terraform
L25: "account_id" => aurora.account_id,
L26: "infrastructure_name" => aurora.cluster_id,
L29: "aurora" => {
L30: "identifier" => aurora.identifier,
L31: "in_use_by" => aurora.in_use_by,
L32: "clusters_in_use" => aurora.clusters_in_use,
L33: "engine" => aurora.engine,
L34: "engine_version" => aurora.engine_version,
L35: "db_name" => aurora.db_name,
L36: "port" => aurora.port,
L37: "maintenance_window" => aurora.maintenance_window,
L38: "backup_window" => aurora.backup_window,
L39: "backup_retention_period" => aurora.backup_retention_period,
L40: "force_ssl" => aurora.force_ssl,
L41: "parameter_store_path_db_url_name" => aurora.parameter_store_path_db_url_name,
L42: "sql_backup_scheduled_task_environment_variables" => aurora.sql_backup_scheduled_task_environment_variables,
L43: "sync_sql_backup_to_azure" => aurora.sync_sql_backup_to_azure,
L44: "replication_bucket_destination_arn" => aurora.replication_bucket_destination_arn,
L45: "replication_kms_key_id" => aurora.replication_kms_key_id
L55: [aurora.cluster_id, aurora.identifier, "aurora", env_name].join("-")
L62: "aurora"
---
File: lib/tasks/dalmatian.rake
L78: skip_deployments = ENV["skip_deployments"] || "hosted-zone,vpn-customer-gateway,ecs,ecs-services,elasticache-cluster,opensearch-cluster,aurora,rds,shared-loadbalancer,waf,s3"
---
File: docs/rds-snapshots-and-point-in-time-restore.md
L89: dalmatian aurora export-dump -i dxw-govpress -r cluster2 -d salukitest -e prod -o /tmp
L95: $ dalmatian aurora shell -i dxw-govpress -r cluster2 -e prod
L132: dalmatian aurora import-dump -i dxw-govpress -r cluster2 -d salukitest -e prod -f /tmp/salukitest-prod-sql-export.sql
L153: - `dxw-govpress-cluster2-aurora-prod-aurora-sg`
L191: #### Aurora
L216: >Remember, `$DB_HOST` should be the **Endpoint** you got from the Aurora Instance earlier!
L246: dalmatian aurora import-dump -i dxw-govpress -r cluster2 -d salukitest -e prod -f ~/Downloads/salukitest.sql
---
File: Terrafile
L51: # https://github.com/terraform-aws-modules/terraform-aws-rds-aurora
L52: terraform-aws-rds-aurora:
L53: source: "git@github.com:terraform-aws-modules/terraform-aws-rds-aurora.git"
---
File: terraform/infrastructures/aurora/data.tf
L52: for_each = toset(local.aurora_extra_clusters_in_use)
L69: count = local.aurora_sql_backup_sync_azure_enabled ? 1 : 0
L74: count = local.aurora_sql_backup_sync_azure_enabled ? 1 : 0
L79: count = local.aurora_sql_backup_sync_azure_enabled ? 1 : 0
L84: count = local.aurora_sql_backup_sync_azure_enabled ? 1 : 0
L89: count = local.aurora_sql_backup_sync_azure_enabled ? 1 : 0
---
File: terraform/infrastructures/aurora/sql-backups-s3.tf
L2: bucket = "${var.infrastructure_name}-${local.aurora_identifier}-sql-backup"
L80: status = var.aurora.replication_bucket_destination_arn != "" ? "Enabled" : "Suspended"
L85: count = var.aurora.replication_bucket_destination_arn != "" ? 1 : 0
L86: name = "${var.infrastructure_name}-${local.aurora_identifier}-sql-backup-replication"
L106: count = var.aurora.replication_bucket_destination_arn != "" ? 1 : 0
L107: name = "${var.infrastructure_name}-${local.aurora_identifier}-sql-backup-replication-policy"
L113: destination_bucket_arn = var.aurora.replication_bucket_destination_arn
L114: destination_kms_key_arn = var.aurora.replication_kms_key_id
L121: count = var.aurora.replication_bucket_destination_arn != "" ? 1 : 0
L130: count = var.aurora.replication_bucket_destination_arn != "" ? 1 : 0
L140: bucket = var.aurora.replication_bucket_destination_arn
L148: for_each = var.aurora.replication_kms_key_id != "" ? [1] : []
L150: replica_kms_key_id = var.aurora.replication_kms_key_id
---
File: terraform/infrastructures/aurora/session-manager-list-databases.tf
L1: resource "aws_ssm_document" "aurora_db_list" {
L2: name = "${local.aurora_identifier}-aurora-db-list"
L5: "../../ssm_documents/aurora-db-list.json.tpl",
L7: aurora_name = module.aurora.cluster_id
L8: command = local.aurora_db_list_command
---
File: terraform/infrastructures/aurora/variables.tf
L43: variable "aurora" {
L44: description = "Aurora configuration"
---
File: terraform/infrastructures/aurora/session-manager-db-shell.tf
L1: resource "aws_ssm_document" "aurora_shell" {
L2: name = "${local.aurora_identifier}-aurora-shell"
L5: "../../ssm_documents/aurora-shell.json.tpl",
L7: aurora_name = module.aurora.cluster_id
L8: command = local.aurora_shell_command
---
File: terraform/infrastructures/aurora/ssm_parameter.tf
L1: resource "aws_kms_key" "aurora_ssm" {
L6: resource "aws_kms_alias" "aurora_ssm" {
L7: name = "alias/${terraform.workspace}-aurora-values-ssm"
L8: target_key_id = aws_kms_key.aurora_ssm.key_id
L11: resource "aws_ssm_parameter" "aurora_db_password" {
---


@@ -0,0 +1,18 @@
Found 8 matches for pattern ""rds" => {" in path ".":
---
File: lib/dalmatian/rds_test.rb
L29: "rds" => {
---
File: spec/integration/deploys_local_configuration_spec.rb
L390: "rds" => {
L467: "rds" => {
---
File: spec/integration/tests_local_configuration_spec.rb
L363: "rds" => {
L496: "rds" => {
L1734: "rds" => {
L1867: "rds" => {
---
File: spec/unit/rds_test_spec.rb
L135: "rds" => {
---


@@ -0,0 +1,18 @@
Found 8 matches for pattern ""rds" => {" in path ".":
---
File: lib/dalmatian/rds_test.rb
L29: "rds" => {
---
File: spec/integration/tests_local_configuration_spec.rb
L363: "rds" => {
L496: "rds" => {
L1734: "rds" => {
L1867: "rds" => {
---
File: spec/integration/deploys_local_configuration_spec.rb
L390: "rds" => {
L467: "rds" => {
---
File: spec/unit/rds_test_spec.rb
L135: "rds" => {
---


@@ -0,0 +1,4 @@
Error during grep search operation: Process exited with code 2: regex parse error:
"rds" => {
^
error: unclosed counted repetition
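
The failed search above is a logged tool error, and the cause is worth a note: the pattern `"rds" => {` was handed to a regex engine, where a bare `{` opens a counted repetition (as in `a{2,3}`), hence "unclosed counted repetition". A minimal sketch of the two usual workarounds, using plain `grep` as a hypothetical stand-in for the search tool and a throwaway sample file:

```shell
# Build a one-line sample file containing the literal text that broke the search.
printf '%s\n' '"rds" => {' > /tmp/rds_sample.rb

# Workaround 1: fixed-string mode (-F) disables regex interpretation entirely.
grep -cF '"rds" => {' /tmp/rds_sample.rb

# Workaround 2: escape the brace so the regex engine treats it literally.
grep -cE '"rds" => \{' /tmp/rds_sample.rb
```

Both commands count a single match. Elsewhere in this log the same unescaped pattern did return matches, so the tool's handling evidently varies by invocation.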


@@ -0,0 +1,16 @@
Found 8 matches for pattern ""sync_sql_backup_to_azure" => false," in path ".":
---
File: spec/integration/tests_local_configuration_spec.rb
L404: "sync_sql_backup_to_azure" => false,
L535: "sync_sql_backup_to_azure" => false,
L1771: "sync_sql_backup_to_azure" => false,
L1902: "sync_sql_backup_to_azure" => false,
---
File: spec/integration/deploys_local_configuration_spec.rb
L431: "sync_sql_backup_to_azure" => false,
L508: "sync_sql_backup_to_azure" => false,
---
File: spec/unit/rds_spec.rb
L39: "sync_sql_backup_to_azure" => false,
L240: "sync_sql_backup_to_azure" => false,
---


@@ -0,0 +1,16 @@
Found 8 matches for pattern ""sync_sql_backup_to_azure" => false," in path ".":
---
File: spec/unit/rds_spec.rb
L39: "sync_sql_backup_to_azure" => false,
L240: "sync_sql_backup_to_azure" => false,
---
File: spec/integration/deploys_local_configuration_spec.rb
L431: "sync_sql_backup_to_azure" => false,
L506: "sync_sql_backup_to_azure" => false,
---
File: spec/integration/tests_local_configuration_spec.rb
L404: "sync_sql_backup_to_azure" => false,
L535: "sync_sql_backup_to_azure" => false,
L1771: "sync_sql_backup_to_azure" => false,
L1902: "sync_sql_backup_to_azure" => false,
---


@@ -0,0 +1,7 @@
Found 3 matches for pattern "cluster1_validate_cmd_production_rds" in path ".":
---
File: spec/integration/tests_local_configuration_spec.rb
L352: let(:cluster1_validate_cmd_production_rds) do
L3070: allow(Helper).to receive(:run!).with(cluster1_validate_cmd_production_rds)
L3124: expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_rds)
---


@@ -0,0 +1,33 @@
Found 15 matches for pattern "ConfigurationReader.new" in path ".":
---
File: lib/dalmatian/infrastructure.rb
L3: def initialize(configuration = ConfigurationReader.new.infrastructures)
---
File: lib/dalmatian/ci.rb
L7: @ci_variables = ConfigurationReader.new(
---
File: lib/dalmatian/account.rb
L17: @account_bootstrap_variables = ConfigurationReader.new(
---
File: README.md
L115: Infrastructure.new(ConfigurationReader.new(path: 'path/to/yaml/file'))
---
File: spec/integration/deploys_local_configuration_spec.rb
L1576: ConfigurationReader.new(path: "./dalmatian.yml.example", parameter_store: fake_parameter_store).infrastructures
---
File: spec/unit/configuration_reader_spec.rb
L16: ConfigurationReader.new(path: path_to_full_configuration, parameter_store: fake_parameter_store)
L48: ConfigurationReader.new(
L84: ConfigurationReader.new(
L112: ConfigurationReader.new(
L133: ConfigurationReader.new(
L153: ConfigurationReader.new(
L171: ConfigurationReader.new(
---
File: spec/integration/deploys_remote_configuration_spec.rb
L46: ConfigurationReader.new(path: "./dalmatian.yml.source-git.example", parameter_store: fake_parameter_store).call(redownload: true)
L48: ConfigurationReader.new(path: "./dalmatian.yml.source-git.example", parameter_store: fake_parameter_store).infrastructures
---
File: spec/integration/tests_local_configuration_spec.rb
L3107: ConfigurationReader.new(path: "./dalmatian.yml.example", parameter_store: fake_parameter_store).infrastructures
---


@@ -0,0 +1,108 @@
Found 84 matches for pattern "replication" in path ".":
---
File: lib/dalmatian/rds.rb
L109: def replication_bucket_destination_arn
L110: reference["replication_bucket_destination_arn"] || ""
L113: def replication_kms_key_id
L114: reference["replication_kms_key_id"] || ""
L158: "replication_bucket_destination_arn" => replication_bucket_destination_arn,
L159: "replication_kms_key_id" => replication_kms_key_id,
---
File: lib/dalmatian/aurora.rb
L98: def replication_bucket_destination_arn
L99: reference["replication_bucket_destination_arn"] || ""
L102: def replication_kms_key_id
L103: reference["replication_kms_key_id"] || ""
L144: "replication_bucket_destination_arn" => replication_bucket_destination_arn,
L145: "replication_kms_key_id" => replication_kms_key_id
---
File: spec/unit/rds_spec.rb
L40: "replication_bucket_destination_arn" => "arn:aws:s3:::dest-bucket",
L41: "replication_kms_key_id" => "key-id",
L177: describe "#replication_bucket_destination_arn" do
L178: it "uses the rds replication_bucket_destination_arn" do
L179: expect(rds.replication_bucket_destination_arn).to eq("arn:aws:s3:::dest-bucket")
L183: describe "#replication_kms_key_id" do
L184: it "uses the rds replication_kms_key_id" do
L185: expect(rds.replication_kms_key_id).to eq("key-id")
L241: "replication_bucket_destination_arn" => "arn:aws:s3:::dest-bucket",
L242: "replication_kms_key_id" => "key-id",
---
File: docs/database-backups.md
L37: ## Cross-account S3 replication for SQL backups
L47: replication_bucket_destination_arn: "arn:aws:s3:::your-destination-bucket-name"
L48: replication_kms_key_id: "your-destination-kms-key-id"
L53: The destination account must permit the source account's replication role to write to the bucket and use the KMS key.
L57: Add a policy to the destination bucket to allow the replication role from the source account:
L64: "Sid": "AllowReplicationFromDalmatianSource",
L67: "AWS": "arn:aws:iam::<SOURCE_ACCOUNT_ID>:role/<INFRASTRUCTURE_NAME>-<RDS_ID>-sql-backup-replication"
L83: If the destination bucket uses a Customer Managed Key (CMK) for encryption, the key policy must allow the source replication role to use it:
L87: "Sid": "AllowUsageByDalmatianSourceReplicationRole",
L90: "AWS": "arn:aws:iam::<SOURCE_ACCOUNT_ID>:role/<INFRASTRUCTURE_NAME>-<RDS_ID>-sql-backup-replication"
---
File: terraform/infrastructures/rds/sql-backups-s3.tf
L80: status = var.rds.replication_bucket_destination_arn != "" ? "Enabled" : "Suspended"
L84: resource "aws_iam_role" "replication" {
L85: count = var.rds.replication_bucket_destination_arn != "" ? 1 : 0
L86: name = "${var.infrastructure_name}-${local.rds_identifier}-sql-backup-replication"
L105: resource "aws_iam_policy" "replication" {
L106: count = var.rds.replication_bucket_destination_arn != "" ? 1 : 0
L107: name = "${var.infrastructure_name}-${local.rds_identifier}-sql-backup-replication-policy"
L110: "../../policies/s3-replication-policy.json.tpl",
L113: destination_bucket_arn = var.rds.replication_bucket_destination_arn
L114: destination_kms_key_arn = var.rds.replication_kms_key_id
L120: resource "aws_iam_role_policy_attachment" "replication" {
L121: count = var.rds.replication_bucket_destination_arn != "" ? 1 : 0
L122: role = aws_iam_role.replication[0].name
L123: policy_arn = aws_iam_policy.replication[0].arn
L126: resource "aws_s3_bucket_replication_configuration" "sql_backups" {
L130: count = var.rds.replication_bucket_destination_arn != "" ? 1 : 0
L133: role = aws_iam_role.replication[0].arn
L136: id = "replication"
L140: bucket = var.rds.replication_bucket_destination_arn
L148: for_each = var.rds.replication_kms_key_id != "" ? [1] : []
L150: replica_kms_key_id = var.rds.replication_kms_key_id
---
File: terraform/infrastructures/rds/variables.tf
L66: replication_bucket_destination_arn = string
L67: replication_kms_key_id = string
---
File: terraform/infrastructures/elasticache-cluster/ssm-parameter.tf
L7: value = local.elasticache_cluster_engine == "redis" ? "redis://${aws_elasticache_replication_group.elasticache_redis[0].primary_endpoint_address}:${aws_elasticache_replication_group.elasticache_redis[0].port}" : ""
---
File: terraform/infrastructures/elasticache-cluster/elasticache-redis.tf
L23: resource "aws_elasticache_replication_group" "elasticache_redis" {
L26: replication_group_id = local.elasticache_cluster_id
L27: description = "ElastiCache replication group for ${var.infrastructure_name}-${var.environment}-${local.elasticache_cluster_identifier}"
---
File: terraform/infrastructures/aurora/sql-backups-s3.tf
L80: status = var.aurora.replication_bucket_destination_arn != "" ? "Enabled" : "Suspended"
L84: resource "aws_iam_role" "replication" {
L85: count = var.aurora.replication_bucket_destination_arn != "" ? 1 : 0
L86: name = "${var.infrastructure_name}-${local.aurora_identifier}-sql-backup-replication"
L105: resource "aws_iam_policy" "replication" {
L106: count = var.aurora.replication_bucket_destination_arn != "" ? 1 : 0
L107: name = "${var.infrastructure_name}-${local.aurora_identifier}-sql-backup-replication-policy"
L110: "../../policies/s3-replication-policy.json.tpl",
L113: destination_bucket_arn = var.aurora.replication_bucket_destination_arn
L114: destination_kms_key_arn = var.aurora.replication_kms_key_id
L120: resource "aws_iam_role_policy_attachment" "replication" {
L121: count = var.aurora.replication_bucket_destination_arn != "" ? 1 : 0
L122: role = aws_iam_role.replication[0].name
L123: policy_arn = aws_iam_policy.replication[0].arn
L126: resource "aws_s3_bucket_replication_configuration" "sql_backups" {
L130: count = var.aurora.replication_bucket_destination_arn != "" ? 1 : 0
L133: role = aws_iam_role.replication[0].arn
L136: id = "replication"
L140: bucket = var.aurora.replication_bucket_destination_arn
L148: for_each = var.aurora.replication_kms_key_id != "" ? [1] : []
L150: replica_kms_key_id = var.aurora.replication_kms_key_id
---
File: terraform/infrastructures/aurora/variables.tf
L63: replication_bucket_destination_arn = string
L64: replication_kms_key_id = string
---
File: terraform/policies/s3-replication-policy.json.tpl
L6: "s3:GetReplicationConfiguration",
L16: "s3:GetObjectVersionForReplication",
---


@@ -0,0 +1,7 @@
Found 3 matches for pattern "cluster1_validate_cmd_production_aurora" in path ".":
---
File: spec/integration/tests_local_configuration_spec.rb
L551: let(:cluster1_validate_cmd_production_aurora) do
L3311: allow(Helper).to receive(:run!).with(cluster1_validate_cmd_production_aurora)
L3368: expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_aurora)
---


@@ -0,0 +1,92 @@
Found 56 matches for pattern "sql_backup_scheduled_task_environment_variables" in path ".":
---
File: lib/dalmatian/rds.rb
L97: def sql_backup_scheduled_task_environment_variables
L98: reference["sql_backup_scheduled_task_environment_variables"] || []
L101: def check_sql_backup_scheduled_task_environment_variables
L102: reference["check_sql_backup_scheduled_task_environment_variables"] || []
L155: "sql_backup_scheduled_task_environment_variables" => sql_backup_scheduled_task_environment_variables,
L156: "check_sql_backup_scheduled_task_environment_variables" => check_sql_backup_scheduled_task_environment_variables,
---
File: lib/dalmatian/rds_test.rb
L46: "sql_backup_scheduled_task_environment_variables" => rds.sql_backup_scheduled_task_environment_variables,
L47: "check_sql_backup_scheduled_task_environment_variables" => rds.check_sql_backup_scheduled_task_environment_variables,
---
File: lib/dalmatian/aurora.rb
L90: def sql_backup_scheduled_task_environment_variables
L91: reference["sql_backup_scheduled_task_environment_variables"] || []
L142: "sql_backup_scheduled_task_environment_variables" => sql_backup_scheduled_task_environment_variables,
---
File: spec/integration/tests_local_configuration_spec.rb
L394: "sql_backup_scheduled_task_environment_variables" => [
L400: "check_sql_backup_scheduled_task_environment_variables" => [
L527: "sql_backup_scheduled_task_environment_variables" => [
L533: "check_sql_backup_scheduled_task_environment_variables" => [
L1987: "sql_backup_scheduled_task_environment_variables" => [
L1993: "check_sql_backup_scheduled_task_environment_variables" => [
L2120: "sql_backup_scheduled_task_environment_variables" => [
L2126: "check_sql_backup_scheduled_task_environment_variables" => [
---
File: spec/integration/deploys_local_configuration_spec.rb
L421: "sql_backup_scheduled_task_environment_variables" => [
L427: "check_sql_backup_scheduled_task_environment_variables" => [
L486: "sql_backup_scheduled_task_environment_variables" => [
L555: "sql_backup_scheduled_task_environment_variables" => [
---
File: dalmatian.yml.example
L141: sql_backup_scheduled_task_environment_variables:
L144: check_sql_backup_scheduled_task_environment_variables:
---
File: spec/unit/aurora_test_spec.rb
L52: sql_backup_scheduled_task_environment_variables: [
L76: "sql_backup_scheduled_task_environment_variables" => [{"name" => "foo", "value" => "bar"}],
L180: "sql_backup_scheduled_task_environment_variables" => [
---
File: spec/unit/rds_spec.rb
L27: "sql_backup_scheduled_task_environment_variables" => [
L33: "check_sql_backup_scheduled_task_environment_variables" => [
L159: describe "#sql_backup_scheduled_task_environment_variables" do
L160: it "uses the rds sql_backup_scheduled_task_environment_variables" do
L161: expect(rds.sql_backup_scheduled_task_environment_variables).to eq([{"name" => "foo", "value" => "bar"}])
L165: describe "#check_sql_backup_scheduled_task_environment_variables" do
L166: it "uses the rds check_sql_backup_scheduled_task_environment_variables" do
L167: expect(rds.check_sql_backup_scheduled_task_environment_variables).to eq([{"name" => "foo", "value" => "bar"}])
L228: "sql_backup_scheduled_task_environment_variables" => [
L234: "check_sql_backup_scheduled_task_environment_variables" => [
---
File: spec/unit/rds_test_spec.rb
L51: sql_backup_scheduled_task_environment_variables: [
L57: check_sql_backup_scheduled_task_environment_variables: [
L168: "sql_backup_scheduled_task_environment_variables" => [
L174: "check_sql_backup_scheduled_task_environment_variables" => [
---
File: spec/unit/aurora_spec.rb
L28: "sql_backup_scheduled_task_environment_variables" => [
L139: describe "#sql_backup_scheduled_task_environment_variables" do
L140: it "uses the aurora sql_backup_scheduled_task_environment_variables" do
L141: expect(aurora.sql_backup_scheduled_task_environment_variables).to eq([{"name" => "foo", "value" => "bar"}])
L197: "sql_backup_scheduled_task_environment_variables" => [
---
File: terraform/infrastructures/rds/variables.tf
L63: sql_backup_scheduled_task_environment_variables = list(map(string))
L64: check_sql_backup_scheduled_task_environment_variables = list(map(string))
---
File: terraform/infrastructures/rds/check_sql_backups_scheduled_task.tf
L70: environment = jsonencode(local.rds_check_sql_backup_scheduled_task_environment_variables)
---
File: terraform/infrastructures/rds/sql-backups-scheduled-task.tf
L72: environment = jsonencode(local.rds_sql_backup_scheduled_task_environment_variables)
---
File: terraform/infrastructures/rds/locals.tf
L18: rds_sql_backup_scheduled_task_environment_variables = var.rds["sql_backup_scheduled_task_environment_variables"]
L19: rds_check_sql_backup_scheduled_task_environment_variables = var.rds["check_sql_backup_scheduled_task_environment_variables"]
---
File: terraform/infrastructures/aurora/variables.tf
L61: sql_backup_scheduled_task_environment_variables = list(map(string))
---
File: terraform/infrastructures/aurora/sql-backups-scheduled-task.tf
L72: environment = jsonencode(local.aurora_sql_backup_scheduled_task_environment_variables)
---
File: terraform/infrastructures/aurora/locals.tf
L17: aurora_sql_backup_scheduled_task_environment_variables = var.aurora["sql_backup_scheduled_task_environment_variables"]
---
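The grep hits above trace `sql_backup_scheduled_task_environment_variables` from `dalmatian.yml` through the Ruby classes into the Terraform variables. A minimal Ruby sketch of the accessor pattern grepped from `lib/dalmatian/aurora.rb` L90-91 (the class below is a simplified stand-in, not the real Aurora class, assuming `reference` holds the parsed YAML entry for one resource):

```ruby
# Simplified stand-in for the accessor pattern seen in lib/dalmatian/aurora.rb.
# `reference` is assumed to be the parsed dalmatian.yml entry for one resource.
class AuroraReference
  def initialize(reference)
    @reference = reference
  end

  # Falls back to an empty list when the key is absent, mirroring
  # `reference["sql_backup_scheduled_task_environment_variables"] || []`.
  def sql_backup_scheduled_task_environment_variables
    @reference["sql_backup_scheduled_task_environment_variables"] || []
  end
end

configured = AuroraReference.new(
  "sql_backup_scheduled_task_environment_variables" => [
    { "name" => "foo", "value" => "bar" }
  ]
)
defaulted = AuroraReference.new({})

configured.sql_backup_scheduled_task_environment_variables
# => [{ "name" => "foo", "value" => "bar" }]
defaulted.sql_backup_scheduled_task_environment_variables
# => []
```

The `|| []` default is what lets resources omit the key in `dalmatian.yml` while the Terraform variable still receives a valid empty `list(map(string))`.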

View File

@@ -0,0 +1,4 @@
Directory listing for /Users/bob/git/dxw/dalmatian/spec/integration:
deploys_local_configuration_spec.rb
deploys_remote_configuration_spec.rb
tests_local_configuration_spec.rb

View File

@@ -0,0 +1,46 @@
Directory listing for /Users/bob/git/dxw/dalmatian/spec/unit:
account_spec.rb
cache_handler_selector_spec.rb
cache_handler_spec.rb
ci_spec.rb
cluster_deployment_spec.rb
cluster_spec.rb
cluster_test_spec.rb
configuration_reader_spec.rb
elasticache_cluster_spec.rb
elasticache_cluster_test_spec.rb
git_cache_handler_spec.rb
helper_spec.rb
hosted_zone_deployment_spec.rb
hosted_zone_spec.rb
hosted_zone_test_spec.rb
infrastructure_spec.rb
logger_spec.rb
opensearch_cluster_spec.rb
opensearch_cluster_test_spec.rb
parameter_store_spec.rb
rds_deployment_spec.rb
rds_spec.rb
rds_test_spec.rb
remote_reference_values_spec.rb
s3_cache_handler_spec.rb
s3_deployment_spec.rb
s3_spec.rb
s3_test_spec.rb
service_deployment_spec.rb
service_spec.rb
service_test_spec.rb
shared_loadbalancer_deployment_spec.rb
shared_loadbalancer_spec.rb
shared_loadbalancer_test_spec.rb
source_deployment_spec.rb
source_spec.rb
source_test_spec.rb
terraform_spec.rb
url_cache_handler_spec.rb
vpn_customer_gateway_deployment_spec.rb
vpn_customer_gateway_spec.rb
vpn_customer_gateway_test_spec.rb
waf_deployment_spec.rb
waf_spec.rb
waf_test_spec.rb

View File

@@ -0,0 +1,56 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 131-180 of 437 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 180.
--- FILE CONTENT (truncated) ---
allocated_storage: 20
storage_encrypted: true
storage_type: 'gp3'
db_name: 'testapp'
port: 5432
maintenance_window: 'mon:19:00-mon:19:30'
backup_window: '09:00-10:00'
backup_retention_period: 31
force_ssl: true
parameter_store_path_db_url_name: 'DATABASE_URL'
sql_backup_scheduled_task_environment_variables:
- name: "foo"
value: "bar"
check_sql_backup_scheduled_task_environment_variables:
- name: "foo"
value: "bar"
replication_bucket_destination_arn: "arn:aws:s3:::your-destination-bucket-name"
replication_kms_key_id: "your-destination-kms-key-id"
codebuild_access:
- service-name
aurora:
- identifier: testaurora
in_use_by:
- test-service
engine: 'aurora-postgresql'
engine_version: '11.9'
db_name: 'testapp'
port: 5432
maintenance_window: 'mon:19:00-mon:19:30'
backup_window: '09:00-10:00'
backup_retention_period: 31
force_ssl: true
minimum_size:
staging: 1
production: 2
maximum_size:
staging: 1
production: 2
parameter_store_path_db_url_name: 'DATABASE_URL'
replication_bucket_destination_arn: "arn:aws:s3:::your-destination-bucket-name"
replication_kms_key_id: "your-destination-kms-key-id"
elasticache_cluster:
- identifier: testredis
in_use_by:
- test-service
engine: 'redis'
node_type: 'cache.t2.micro'
node_count: 1
engine_version: '5.0.6'
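The YAML list above is the origin of the `list(map(string))` Terraform variables seen in the earlier grep results. A minimal Ruby sketch (the YAML fragment below is a hypothetical excerpt, not the full example file) of how that list round-trips to the JSON shape that `jsonencode` renders into the scheduled task's container definition:

```ruby
require "yaml"
require "json"

# Hypothetical excerpt mirroring the dalmatian.yml.example shape above.
yaml = <<~YAML
  rds:
    - identifier: testservice
      sql_backup_scheduled_task_environment_variables:
        - name: "foo"
          value: "bar"
YAML

config = YAML.safe_load(yaml)
env_vars = config["rds"].first["sql_backup_scheduled_task_environment_variables"]

# The same list, serialised the way Terraform's jsonencode emits it
# for the ECS scheduled task's environment.
JSON.generate(env_vars)
# => "[{\"name\":\"foo\",\"value\":\"bar\"}]"
```

Each entry stays a flat `name`/`value` map of strings, which is exactly what ECS container definitions expect for `environment`.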

View File

@@ -0,0 +1,106 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 551-650 of 3518 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 650.
--- FILE CONTENT (truncated) ---
let(:cluster1_validate_cmd_production_aurora) do
terraform_validate_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.medium",
"min_servers" => "2",
"max_servers" => "4",
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"environment" => "production",
"aurora" => {
"identifier" => "testaurora",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test"
],
"staging" => [
"test"
]
},
"minimum_size" => {
"production" => 2,
"staging" => 1
},
"maximum_size" => {
"production" => 2,
"staging" => 1
},
"engine" => "aurora-postgresql",
"engine_version" => "11.9",
"db_name" => "testapp",
"port" => 5432,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::your-destination-bucket-name",
"replication_kms_key_id" => "your-destination-kms-key-id"
}
)
)
)
end
let(:cluster1_validate_cmd_staging_aurora) do
terraform_validate_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "4",
"track_revision" => "feature/experiment",
"extra_ecs_clusters" => [
{
"name" => "test",
"subnets_name" => "extra_private_subnets",
"min_servers" => "2",
"max_servers" => "4",
"instance_type" => "t3.small"
}
],
"tinyproxy" => {
"create" => true
},
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"environment" => "staging",
"aurora" => {
"identifier" => "testaurora",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test"
],
"staging" => [
"test"
]
},
"minimum_size" => {
"production" => 2,
"staging" => 1
},
"maximum_size" => {
"production" => 2,
"staging" => 1

View File

@@ -0,0 +1,105 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 451-550 of 3518 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 550.
--- FILE CONTENT (truncated) ---
]
},
{
"name" => "AWSManagedRulesCommonRuleSet",
"exclude_rules" => [
"SizeRestrictions_BODY"
]
}
],
"associations" => {
shared_loadbalancers: [
"test-lb-1"
],
service_cloudfront: [
"test-service"
]
}
}
)
)
)
end
let(:cluster1_validate_cmd_staging_rds) do
terraform_validate_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "4",
"track_revision" => "feature/experiment",
"extra_ecs_clusters" => [
{
"name" => "test",
"subnets_name" => "extra_private_subnets",
"min_servers" => "2",
"max_servers" => "4",
"instance_type" => "t3.small"
}
],
"tinyproxy" => {
"create" => true
},
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"environment" => "staging",
"rds" => {
"identifier" => "testservice",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test"
],
"staging" => [
"test"
]
},
"instance_class" => {
"production" => "db.t2.small",
"staging" => "db.t2.micro"
},
"engine" => "postgres",
"engine_version" => "11.4",
"allocated_storage" => 20,
"storage_encrypted" => true,
"storage_type" => "gp3",
"db_name" => "testapp",
"port" => 5432,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::your-destination-bucket-name",
"replication_kms_key_id" => "your-destination-kms-key-id",
"codebuild_access" => [
"service-name"
]
}
)
)
)
end

View File

@@ -0,0 +1,106 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 1601-1700 of 1717 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 1700.
--- FILE CONTENT (truncated) ---
allow(Helper).to receive(:run!).with(cluster1_apply_cmd_production)
allow(Helper).to receive(:run!).with(cluster1_apply_cmd_staging)
allow(Helper).to receive(:run!).with(cluster1_apply_cmd_production_waf)
allow(Helper).to receive(:run!).with(cluster1_apply_cmd_staging_waf)
allow(Helper).to receive(:run!).with(cluster1_apply_cmd_production_rds)
allow(Helper).to receive(:run!).with(cluster1_apply_cmd_staging_rds)
allow(Helper).to receive(:run!).with(cluster1_apply_cmd_production_elasticache_cluster)
allow(Helper).to receive(:run!).with(cluster1_apply_cmd_staging_elasticache_cluster)
allow(Helper).to receive(:run!).with(cluster1_apply_cmd_production_opensearch_cluster)
allow(Helper).to receive(:run!).with(cluster1_apply_cmd_staging_opensearch_cluster)
allow(Helper).to receive(:run!).with(cluster1_apply_cmd_production_service)
allow(Helper).to receive(:run!).with(cluster1_apply_cmd_staging_service)
allow(Helper).to receive(:run!).with(cluster1_apply_cmd_production_shared_loadbalancer)
allow(Helper).to receive(:run!).with(cluster1_apply_cmd_staging_shared_loadbalancer)
allow(Helper).to receive(:run!).with(cluster2_apply_cmd_production)
allow(Helper).to receive(:run!).with(cluster2_apply_cmd_staging)
allow(Helper).to receive(:run!).with(cluster3_apply_cmd_staging)
allow(Helper).to receive(:run!).with(cluster4_apply_cmd_production)
Infrastructure.new(
ConfigurationReader.new(path: "./dalmatian.yml.example", parameter_store: fake_parameter_store).infrastructures
).deploy(auto_approve: false, plan: false)
end
describe 'use of "terraform apply"' do
it 'passes the hosted zone config to "terraform apply"' do
expect(Helper).to have_received(:run!).with(cluster1_apply_cmd_hz)
end
it 'passes the s3 config to "terraform apply"' do
expect(Helper).to have_received(:run!).with(cluster1_apply_cmd_s3)
end
it 'passes the vpn customer gateway config to "terraform apply"' do
expect(Helper).to have_received(:run!).with(cluster1_apply_cmd_vpn_cg)
end
it 'passes the base config to "terraform apply"' do
expect(Helper).to have_received(:run!).with(cluster1_apply_cmd_production).twice
expect(Helper).to have_received(:run!).with(cluster1_apply_cmd_staging).twice
end
it 'passes the waf config to "terraform apply"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_apply_cmd_production_waf)
expect(Helper).to have_received(:run!).with(cluster1_apply_cmd_staging_waf)
end
end
it 'passes the rds config to "terraform apply"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_apply_cmd_production_rds)
expect(Helper).to have_received(:run!).with(cluster1_apply_cmd_staging_rds)
end
end
it 'passes the elasticache config to "terraform apply"' do
aggregate_failures do
expect(Helper).to have_received(:run!)
.with(cluster1_apply_cmd_production_elasticache_cluster)
expect(Helper).to have_received(:run!)
.with(cluster1_apply_cmd_staging_elasticache_cluster)
end
end
it 'passes the opensearch config to "terraform apply"' do
aggregate_failures do
expect(Helper).to have_received(:run!)
.with(cluster1_apply_cmd_production_opensearch_cluster)
expect(Helper).to have_received(:run!)
.with(cluster1_apply_cmd_staging_opensearch_cluster)
end
end
it 'passes the service config to "terraform apply"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_apply_cmd_production_service)
expect(Helper).to have_received(:run!).with(cluster1_apply_cmd_staging_service)
end
end
it 'passes the loadbalancer config to "terraform apply"' do
aggregate_failures do
expect(Helper).to have_received(:run!)
.with(cluster1_apply_cmd_production_shared_loadbalancer)
expect(Helper).to have_received(:run!)
.with(cluster1_apply_cmd_staging_shared_loadbalancer)
end
end
it 'passes the cluster 2 config to "terraform apply"' do

View File

@@ -0,0 +1,105 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 1501-1600 of 1717 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 1600.
--- FILE CONTENT (truncated) ---
"production" => [],
"staging" => [
"example-domain-name.co.uk"
]
}
},
"internal" => false,
"ip_whitelist" => [
{
"name" => "public",
"cidr" => "0.0.0.0/0"
}
],
"idle_timeout" => "60",
"global_accelerator" => {
"production" => true,
"staging" => false
},
"ssl_policy" => "ELBSecurityPolicy-TLS-1-2-2017-01"
}
)
)
)
end
let(:cluster2_apply_cmd_production) do
terraform_apply_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.medium",
"min_servers" => "2",
"max_servers" => "10",
"environment" => "production",
"cluster_name" => "shared-new-cluster-production",
"account_id" => "123456789012",
"infrastructure_name" => "shared-new-cluster",
"dalmatian_role" => "dalmatian-admin"
)
)
)
end
let(:cluster2_apply_cmd_staging) do
terraform_apply_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "10",
"environment" => "staging",
"cluster_name" => "shared-new-cluster-staging",
"account_id" => "123456789012",
"infrastructure_name" => "shared-new-cluster",
"dalmatian_role" => "dalmatian-admin"
)
)
)
end
let(:cluster3_apply_cmd_staging) do
terraform_apply_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"example_var" => "foo",
"environment" => "staging",
"cluster_name" => "shared-cluster-staging",
"account_id" => "123456789012",
"infrastructure_name" => "existing-shared-cluster-staging",
"dalmatian_role" => "dalmatian-admin"
)
)
)
end
let(:cluster4_apply_cmd_production) do
terraform_apply_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"example_var" => "bar",
"environment" => "production",
"cluster_name" => "shared-cluster-production",
"account_id" => "123456789012",
"infrastructure_name" => "existing-shared-cluster-production",
"dalmatian_role" => "dalmatian-admin"
)
)
)
end
before do
allow(Logger).to receive(:info)
stub_git_checkout
stub_workspace_creation_cmds
allow(Helper).to receive(:run!).with(cluster1_apply_cmd_hz)
allow(Helper).to receive(:run!).with(cluster1_apply_cmd_s3)
allow(Helper).to receive(:run!).with(cluster1_apply_cmd_vpn_cg)

View File

@@ -0,0 +1,106 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 781-880 of 3543 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 880.
--- FILE CONTENT (truncated) ---
"test"
],
"staging" => [
"test"
]
},
"minimum_size" => {
"production" => 2,
"staging" => 1
},
"maximum_size" => {
"production" => 2,
"staging" => 1
},
"engine" => "aurora-postgresql",
"engine_version" => "11.9",
"db_name" => "testapp",
"port" => 5432,
"force_ssl" => true,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::your-destination-bucket-name",
"replication_kms_key_id" => "your-destination-kms-key-id"
}
)
)
)
end
let(:cluster1_validate_cmd_production_elasticache_cluster) do
terraform_validate_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.medium",
"min_servers" => "2",
"max_servers" => "4",
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"environment" => "production",
"elasticache_cluster" => {
"identifier" => "testredis",
"in_use_by" => [
"test-service"
],
"node_type" => "cache.t2.micro",
"node_count" => 1,
"engine" => "redis",
"engine_version" => "5.0.6",
"parameters" => [],
"port" => 6379,
"maintenance_window" => "mon:19:00-mon:22:00",
"snapshot_window" => "09:00-10:00",
"parameter_store_path_elasticache_cluster_url_name" => "REDIS_URL"
}
)
)
)
end
let(:cluster1_validate_cmd_staging_elasticache_cluster) do
terraform_validate_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "4",
"track_revision" => "feature/experiment",
"extra_ecs_clusters" => [
{
"name" => "test",
"subnets_name" => "extra_private_subnets",
"min_servers" => "2",
"max_servers" => "4",
"instance_type" => "t3.small"
}
],
"tinyproxy" => {
"create" => true
},
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"environment" => "staging",
"elasticache_cluster" => {

View File

@@ -0,0 +1,106 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 401-500 of 1669 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 500.
--- FILE CONTENT (truncated) ---
"staging" => [
"test"
]
},
"instance_class" => {
"production" => "db.t2.small",
"staging" => "db.t2.micro"
},
"engine" => "postgres",
"engine_version" => "11.4",
"allocated_storage" => 20,
"storage_encrypted" => true,
"storage_type" => "gp3",
"db_name" => "testapp",
"port" => 5432,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::your-destination-bucket-name",
"replication_kms_key_id" => "your-destination-kms-key-id",
"codebuild_access" => [
"service-name"
]
}
)
)
)
end
let(:cluster1_apply_cmd_staging_rds) do
terraform_apply_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "4",
"track_revision" => "feature/experiment",
"extra_ecs_clusters" => [
{
"name" => "test",
"subnets_name" => "extra_private_subnets",
"min_servers" => "2",
"max_servers" => "4",
"instance_type" => "t3.small"
}
],
"tinyproxy" => {
"create" => true
},
"environment" => "staging",
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-admin",
"rds" => {
"identifier" => "testservice",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test"
],
"staging" => [
"test"
]
},
"instance_class" => {
"production" => "db.t2.small",
"staging" => "db.t2.micro"
},
"engine" => "postgres",
"engine_version" => "11.4",
"allocated_storage" => 20,
"storage_encrypted" => true,
"storage_type" => "gp3",
"db_name" => "testapp",
"port" => 5432,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",

View File

@@ -0,0 +1,106 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 3351-3450 of 3503 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 3450.
--- FILE CONTENT (truncated) ---
allow(Helper).to receive(:run!).with(cluster1_validate_cmd_staging_shared_loadbalancer)
allow(Helper).to receive(:run!).with(cluster1_plan_cmd_production_shared_loadbalancer)
allow(Helper).to receive(:run!).with(cluster1_plan_cmd_staging_shared_loadbalancer)
allow(Helper).to receive(:run!).with(cluster2_validate_cmd_production)
allow(Helper).to receive(:run!).with(cluster2_validate_cmd_staging)
allow(Helper).to receive(:run!).with(cluster2_plan_cmd_production)
allow(Helper).to receive(:run!).with(cluster2_plan_cmd_staging)
allow(Helper).to receive(:run!).with(cluster3_validate_cmd_staging)
allow(Helper).to receive(:run!).with(cluster3_plan_cmd_staging)
allow(Helper).to receive(:run!).with(cluster4_validate_cmd_production)
allow(Helper).to receive(:run!).with(cluster4_plan_cmd_production)
Infrastructure.new(
ConfigurationReader.new(path: "./dalmatian.yml.example", parameter_store: fake_parameter_store).infrastructures
).test
end
it 'invokes "terraform validate" with the expected cluster options' do
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_hz)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_s3)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_vpn_cg)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production).twice
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging).twice
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_waf)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_waf)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_rds)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_rds)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_elasticache_cluster)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_elasticache_cluster)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_opensearch_cluster)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_opensearch_cluster)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_service)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_service)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_shared_loadbalancer)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_shared_loadbalancer)
expect(Helper).to have_received(:run!).with(cluster2_validate_cmd_production)
expect(Helper).to have_received(:run!).with(cluster2_validate_cmd_staging)
expect(Helper).to have_received(:run!).with(cluster3_validate_cmd_staging)
expect(Helper).to have_received(:run!).with(cluster4_validate_cmd_production)
end
it 'invokes "terraform validate" with the expected source options' do
expect(Helper).to have_received(:run!).with(source1_validate_cmd_production).twice
expect(Helper).to have_received(:run!).with(source1_validate_cmd_staging).twice
end
describe 'use of "terraform plan"' do
it 'passes the hosted zone config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_hz)
end
end
it 'passes the s3 config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_s3)
end
end
it 'passes the vpn customer gateway config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_vpn_cg)
end
end
it 'passes the base config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_production).twice
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_staging).twice
end
end
it 'passes the waf config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_production_waf)
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_staging_waf)
end
end
it 'passes the rds config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_production_rds)
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_staging_rds)
end
end

View File

@@ -0,0 +1,106 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 3051-3150 of 3503 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 3150.
--- FILE CONTENT (truncated) ---
"production" => true,
"staging" => false
},
"ssl_policy" => "ELBSecurityPolicy-TLS-1-2-2017-01"
}
)
)
)
end
let(:cluster1_plan_cmd_staging_shared_loadbalancer) do
terraform_plan_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "4",
"track_revision" => "feature/experiment",
"extra_ecs_clusters" => [
{
"name" => "test",
"subnets_name" => "extra_private_subnets",
"min_servers" => "2",
"max_servers" => "4",
"instance_type" => "t3.small"
}
],
"tinyproxy" => {
"create" => true
},
"environment" => "staging",
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"shared_loadbalancer" => {
"name" => "test-lb-1",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test"
],
"staging" => [
"test"
]
},
"subnets_name" => "",
"domain_names" => {
"test-service" => {
"production" => [],
"staging" => [
"example-domain-name.co.uk"
]
}
},
"internal" => false,
"ip_whitelist" => [
{
"name" => "public",
"cidr" => "0.0.0.0/0"
}
],
"idle_timeout" => "60",
"global_accelerator" => {
"production" => true,
"staging" => false
},
"ssl_policy" => "ELBSecurityPolicy-TLS-1-2-2017-01"
}
)
)
)
end
let(:source1_validate_cmd_production) do
terraform_validate_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.medium",
"min_servers" => "2",
"max_servers" => "4",
"cluster_name" => "new-dedicated-cluster-production",
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"environment" => "production"
)
)
)
end
let(:source1_validate_cmd_staging) do
terraform_validate_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "4",
"track_revision" => "feature/experiment",

View File

@@ -0,0 +1,106 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 351-450 of 3253 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 450.
--- FILE CONTENT (truncated) ---
)
end
let(:cluster1_validate_cmd_production_rds) do
terraform_validate_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.medium",
"min_servers" => "2",
"max_servers" => "4",
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"environment" => "production",
"rds" => {
"identifier" => "testservice",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test"
],
"staging" => [
"test"
]
},
"instance_class" => {
"production" => "db.t2.small",
"staging" => "db.t2.micro"
},
"engine" => "postgres",
"engine_version" => "11.4",
"allocated_storage" => 20,
"storage_encrypted" => true,
"storage_type" => "gp3",
"db_name" => "testapp",
"port" => 5432,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::your-destination-bucket-name",
"replication_kms_key_id" => "your-destination-kms-key-id",
"codebuild_access" => [
"service-name"
]
}
)
)
)
end
let(:cluster1_validate_cmd_staging_waf) do
terraform_validate_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "4",
"track_revision" => "feature/experiment",
"extra_ecs_clusters" => [
{
"name" => "test",
"subnets_name" => "extra_private_subnets",
"min_servers" => "2",
"max_servers" => "4",
"instance_type" => "t3.small"
}
],
"tinyproxy" => {
"create" => true
},
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"environment" => "staging",
"waf" => {
"name" => "test-1",
"action" => "count",
"ip_deny_list" => [],
"aws_managed_rules" => [
{
"name" => "AWSManagedRulesSQLiRuleSet",
"excluded_path_patterns" => [
"/wp-admin/async-upload.php"

View File

@@ -0,0 +1,106 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 681-780 of 3543 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 780.
--- FILE CONTENT (truncated) ---
"replication_kms_key_id" => "your-destination-kms-key-id"
}
)
)
)
end
let(:cluster1_plan_cmd_production_aurora) do
terraform_plan_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.medium",
"min_servers" => "2",
"max_servers" => "4",
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"environment" => "production",
"aurora" => {
"identifier" => "testaurora",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test"
],
"staging" => [
"test"
]
},
"minimum_size" => {
"production" => 2,
"staging" => 1
},
"maximum_size" => {
"production" => 2,
"staging" => 1
},
"engine" => "aurora-postgresql",
"engine_version" => "11.9",
"db_name" => "testapp",
"port" => 5432,
"force_ssl" => true,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::your-destination-bucket-name",
"replication_kms_key_id" => "your-destination-kms-key-id"
}
)
)
)
end
let(:cluster1_plan_cmd_staging_aurora) do
terraform_plan_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "4",
"track_revision" => "feature/experiment",
"extra_ecs_clusters" => [
{
"name" => "test",
"subnets_name" => "extra_private_subnets",
"min_servers" => "2",
"max_servers" => "4",
"instance_type" => "t3.small"
}
],
"tinyproxy" => {
"create" => true
},
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"environment" => "staging",
"aurora" => {
"identifier" => "testaurora",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
@@ -0,0 +1,106 @@
--- FILE CONTENT (truncated: showing lines 31-130 of 3251) ---
new-dedicated-cluster-ecs-staging
new-dedicated-cluster-0-production
new-dedicated-cluster-0-staging
new-dedicated-cluster-test-service-staging
new-dedicated-cluster-test-service-production
new-dedicated-cluster-test-1-waf-staging
new-dedicated-cluster-test-1-waf-production
new-dedicated-cluster-testservice-rds-staging
new-dedicated-cluster-testservice-rds-production
new-dedicated-cluster-testredis-elasticache-cluster-staging
new-dedicated-cluster-testredis-elasticache-cluster-production
new-dedicated-cluster-testos-opensearch-cluster-staging
new-dedicated-cluster-testos-opensearch-cluster-production
new-dedicated-cluster-test-lb-1-shared-loadbalancer-staging
new-dedicated-cluster-test-lb-1-shared-loadbalancer-production
shared-new-cluster-ecs-production
shared-new-cluster-ecs-staging
existing-shared-cluster-staging-0-staging
existing-shared-cluster-production-0-production
].each do |workspace_name|
allow(Helper).to receive(:run!).with("terraform workspace new #{workspace_name}")
end
end
def stub_git_checkout
allow(Helper).to receive(:run!).with("git checkout feature/experiment")
end
def stub_git_clone_cmds
[
{
source: "git@github.com:dxw/awesome-app-dalmatian-config",
name: "new-dedicated-cluster-0.config"
},
{
source: "git@github.com:dxw/funky-app-dalmatian-config",
name: "existing-shared-cluster-staging-0.config"
},
{
source: "git@github.com:dxw/neat-app-dalmatian-config",
name: "existing-shared-cluster-production-0.config"
},
{
source: "git@github.com:dxw/neat-app-dalmatian-config",
name: "existing-shared-cluster-production-0.config"
}
].each do |src|
allow(Helper).to receive(:run!)
.with("git clone #{src.fetch(:source)} terraform/infrastructures/#{src.fetch(:name)}")
end
end
def infrastructure_defaults
{
"region" => "eu-west-2",
"cidr" => "10.0.0.0/16",
"root_domain_zone" => "dalmatian.dxw.net",
"internal_domain_zone" => "dalmatian.internal",
"ecs_private_subnets" => [
{
"availability_zone" => "eu-west-2a",
"cidr" => "10.0.128.0/24"
},
{
"availability_zone" => "eu-west-2b",
"cidr" => "10.0.129.0/24"
},
{
"availability_zone" => "eu-west-2c",
"cidr" => "10.0.130.0/24"
}
],
"extra_public_subnets" => [
{
"availability_zone" => "eu-west-2a",
"cidr" => "10.0.0.0/24"
},
{
"availability_zone" => "eu-west-2b",
"cidr" => "10.0.1.0/24"
},
{
"availability_zone" => "eu-west-2c",
"cidr" => "10.0.2.0/24"
}
],
"instances_key_name" => "dalmatian-ecs-instances",
"instance_type" => "t2.medium",
"min_servers" => "2",
"max_servers" => "4",
"max_instance_lifetime" => "86400",
"associate_public_ip_address" => "0",
"docker_storage_size" => "40",
"dockerhub_email" => "",
"dockerhub_token" => "",
"enable_efs" => "false",
"encrypt_efs" => "true",
"efs_dirs" => [],
"monitoring_docs_path" => "https://github.com/dxw/dalmatian/docs/monitoring-alarms/"
@@ -0,0 +1,105 @@
--- FILE CONTENT (truncated: showing lines 1-100 of 206) ---
require_relative "../support/spec_helper"
module Dalmatian
RSpec.describe AuroraTest do
let(:helper) { class_double(Helper, change_to: true, run!: true, git_checkout: true) }
let(:logger) { class_double(Logger, info: true) }
let(:terraform) do
class_double(
Terraform,
fmt: true,
init: true,
validate: true,
ensure_presence_of_workspace: true
)
end
let(:aurora) do
instance_double(
Aurora,
identifier: "testaurora",
in_use_by: [
"test-service"
],
clusters_in_use: {
"production" => [
"test"
],
"staging" => [
"test"
]
},
minimum_size: {
"production" => 2,
"staging" => 1
},
maximum_size: {
"production" => 2,
"staging" => 1
},
engine: "aurora-postgresql",
engine_version: "11.9",
db_name: "testapp",
port: 5432,
maintenance_window: "mon:19:00-mon:19:30",
backup_window: "09:00-10:00",
backup_retention_period: 31,
force_ssl: true,
cluster_id: "new-dedicated-cluster-id",
account_id: 123456789012,
parameter_store_path_db_url_name: "DATABASE_URL",
sql_backup_scheduled_task_environment_variables: [
{
"name" => "foo",
"value" => "bar"
}
],
sync_sql_backup_to_azure: false,
replication_bucket_destination_arn: "arn:aws:s3:::dest-bucket",
replication_kms_key_id: "key-id",
to_params: {
"identifier" => "testaurora",
"in_use_by" => ["test-service"],
"clusters_in_use" => {"production" => ["test"], "staging" => ["test"]},
"minimum_size" => {"production" => 2, "staging" => 1},
"maximum_size" => {"production" => 2, "staging" => 1},
"engine" => "aurora-postgresql",
"engine_version" => "11.9",
"db_name" => "testapp",
"port" => 5432,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [{"name" => "foo", "value" => "bar"}],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::dest-bucket",
"replication_kms_key_id" => "key-id"
}
)
end
let!(:aurora_test) do
AuroraTest.new(
aurora: aurora,
env: {name: "staging", details: {"min_servers" => 2, "max_servers" => 4}},
helper: helper,
logger: logger,
terraform: terraform
)
end
describe "#call" do
before do
allow(Terraform).to receive(:init)
allow(Terraform).to receive(:ensure_presence_of_workspace)
allow(logger).to receive(:info)
end
@@ -0,0 +1,149 @@
module Dalmatian
class Aurora
def initialize(cluster:, reference:)
@cluster = cluster
@reference = reference
end
attr_reader :cluster, :reference
def identifier
reference["identifier"]
end
def in_use_by
reference["in_use_by"] || []
end
def clusters_in_use
clusters_in_use = {}
cluster_environments.each do |env_name, _env_config|
clusters_in_use[env_name] = []
services.each do |service|
in_use_by.each do |service_name|
next if service.name != service_name
clusters_in_use[env_name] << if service.launch_on_cluster.empty?
"default_dalmatian_ecs_cluster"
else
service.launch_on_cluster
end
end
end
if in_use_by.empty?
clusters_in_use[env_name] << "default_dalmatian_ecs_cluster" ## fails SimpleCov
end
clusters_in_use[env_name] = clusters_in_use[env_name].uniq
end
clusters_in_use
end
def minimum_size
minimum_size = {}
reference["minimum_size"] = {} if reference["minimum_size"].nil?
cluster_environments.each do |env_name, _env_config|
minimum_size[env_name] = reference["minimum_size"][env_name] || ""
end
minimum_size
end
def maximum_size
maximum_size = {}
reference["maximum_size"] = {} if reference["maximum_size"].nil?
cluster_environments.each do |env_name, _env_config|
maximum_size[env_name] = reference["maximum_size"][env_name] || ""
end
maximum_size
end
def engine
reference["engine"]
end
def engine_version
reference["engine_version"]
end
def db_name
reference["db_name"]
end
def port
reference["port"] || ""
end
def maintenance_window
reference["maintenance_window"] || ""
end
def backup_window
reference["backup_window"] || ""
end
def backup_retention_period
reference["backup_retention_period"] || ""
end
def parameter_store_path_db_url_name
reference["parameter_store_path_db_url_name"] || ""
end
def sql_backup_scheduled_task_environment_variables
reference["sql_backup_scheduled_task_environment_variables"] || []
end
def sync_sql_backup_to_azure
reference["sync_sql_backup_to_azure"] || false
end
def replication_bucket_destination_arn
reference["replication_bucket_destination_arn"] || ""
end
def replication_kms_key_id
reference["replication_kms_key_id"] || ""
end
def force_ssl
reference["force_ssl"] || "false"
end
def cluster_id
cluster.id
end
def cluster_environments
cluster.environments
end
def account_id
cluster.account_id
end
def services
cluster.services
end
def to_params
{
"identifier" => identifier,
"in_use_by" => in_use_by,
"clusters_in_use" => clusters_in_use,
"minimum_size" => minimum_size,
"maximum_size" => maximum_size,
"engine" => engine,
"engine_version" => engine_version,
"db_name" => db_name,
"port" => port,
"force_ssl" => force_ssl,
"maintenance_window" => maintenance_window,
"backup_window" => backup_window,
"backup_retention_period" => backup_retention_period,
"parameter_store_path_db_url_name" => parameter_store_path_db_url_name,
"sql_backup_scheduled_task_environment_variables" => sql_backup_scheduled_task_environment_variables,
"sync_sql_backup_to_azure" => sync_sql_backup_to_azure,
"replication_bucket_destination_arn" => replication_bucket_destination_arn,
"replication_kms_key_id" => replication_kms_key_id
}
end
end
end
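The `clusters_in_use` derivation above is the least obvious part of the Aurora class: each environment maps to the set of ECS clusters its consuming services run on, with a fallback to the default cluster. A standalone sketch of the same algorithm, using a hypothetical `Service` struct and sample data (neither is part of the codebase):

```ruby
# Standalone sketch of the Aurora#clusters_in_use algorithm, with Struct
# stand-ins for the real service objects exposed by Cluster.
Service = Struct.new(:name, :launch_on_cluster)

def clusters_in_use(environments:, services:, in_use_by:)
  environments.each_with_object({}) do |(env_name, _cfg), result|
    result[env_name] = []
    services.each do |service|
      in_use_by.each do |service_name|
        next if service.name != service_name
        # A blank launch_on_cluster means the service runs on the default cluster
        result[env_name] << if service.launch_on_cluster.empty?
          "default_dalmatian_ecs_cluster"
        else
          service.launch_on_cluster
        end
      end
    end
    result[env_name] << "default_dalmatian_ecs_cluster" if in_use_by.empty?
    result[env_name] = result[env_name].uniq
  end
end

services = [
  Service.new("test-service", "test"),
  Service.new("test-service", "")
]
puts clusters_in_use(
  environments: {"staging" => {}, "production" => {}},
  services: services,
  in_use_by: ["test-service"]
).inspect
```

With the sample data above, each environment yields `"test"` plus the default cluster, matching the `{"staging" => ["test", "default_dalmatian_ecs_cluster"], ...}` shape the specs in this dump expect.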
@@ -0,0 +1,126 @@
require_relative "../support/spec_helper"
module Dalmatian
RSpec.describe RdsDeployment do
before do
allow(Helper).to receive(:change_to)
allow(Helper).to receive(:run!)
allow(Logger).to receive(:info)
end
let(:rds_params) { double("rds_params") }
let(:rds) do
instance_double(
Rds,
identifier: "testservice",
cluster_id: "new-dedicated-cluster-id",
account_id: 123456789012,
to_params: rds_params
)
end
let(:deployment) do
RdsDeployment.new(
rds: rds,
env: {name: "staging", details: {"min_servers" => 2, "max_servers" => 4}},
auto_approve: false,
plan: false
)
end
describe "#call" do
before do
allow(Terraform).to receive(:apply)
allow(Terraform).to receive(:plan)
allow(Terraform).to receive(:ensure_presence_of_workspace)
end
it "changes to rds infrastructure directory" do
directory = File.join(
Infrastructure::APP_ROOT,
Infrastructure::PATH,
"rds"
)
deployment.call
expect(Helper).to have_received(:change_to).with(directory)
end
it "asks Terraform to ensure that the workspace is in place" do
deployment.call
expect(Terraform).to have_received(:ensure_presence_of_workspace)
.with("new-dedicated-cluster-id-testservice-rds-staging")
end
context "when in _plan_ mode" do
let(:deployment) do
RdsDeployment.new(
rds: rds,
env: {name: "staging", details: {"min_servers" => 2, "max_servers" => 4}},
auto_approve: false,
plan: true
)
end
it "invokes Terraform.plan with the _dalmatian-read_ role" do
deployment.call
expect(Terraform).to have_received(:plan).with(
tfvars: {
"min_servers" => 2,
"max_servers" => 4,
"rds" => rds_params,
"account_id" => 123456789012,
"infrastructure_name" => "new-dedicated-cluster-id",
"environment" => "staging",
"dalmatian_role" => "dalmatian-read"
},
verbose: false
)
end
end
context "when NOT in _plan_ mode" do
it "invokes Terraform.apply with the _dalmatian-admin_ role" do
deployment.call
expect(Terraform).to have_received(:apply).with(
tfvars: {
"min_servers" => 2,
"max_servers" => 4,
"rds" => rds_params,
"account_id" => 123456789012,
"infrastructure_name" => "new-dedicated-cluster-id",
"environment" => "staging",
"dalmatian_role" => "dalmatian-admin"
},
auto_approve: false,
verbose: false
)
end
context "when in _auto_approve_ mode" do
let(:deployment) do
RdsDeployment.new(
rds: rds,
env: {name: "staging", details: {"min_servers" => 2, "max_servers" => 4}},
auto_approve: true,
plan: false
)
end
it "asks Terraform to use auto_approve mode" do
deployment.call
expect(Terraform).to have_received(:apply).with(
hash_including(auto_approve: true)
)
end
end
end
end
end
end
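The two contexts above fix the IAM role per mode without showing the production code; the assumed rule is small enough to state directly (a hedged sketch, `dalmatian_role` is a hypothetical helper name, not the real implementation):

```ruby
# Hypothetical condensation of what the RdsDeployment spec asserts:
# a Terraform plan only needs read access, an apply needs admin rights.
def dalmatian_role(plan:)
  plan ? "dalmatian-read" : "dalmatian-admin"
end

puts dalmatian_role(plan: true)   # read-only role for plan mode
puts dalmatian_role(plan: false)  # admin role for apply mode
```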
@@ -0,0 +1,199 @@
require_relative "../support/spec_helper"
module Dalmatian
RSpec.describe RdsTest do
let(:helper) { class_double(Helper, change_to: true, run!: true, git_checkout: true) }
let(:logger) { class_double(Logger, info: true) }
let(:terraform) do
class_double(
Terraform,
fmt: true,
init: true,
validate: true,
ensure_presence_of_workspace: true
)
end
let(:rds) do
instance_double(
Rds,
identifier: "testservice",
in_use_by: [
"test-service"
],
clusters_in_use: {
"production" => [
"test"
],
"staging" => [
"test"
]
},
instance_class: {
"production" => "db.t2.small",
"staging" => "db.t2.micro"
},
engine: "postgres",
engine_version: "11.4",
allocated_storage: 20,
storage_encrypted: true,
storage_type: "gp3",
db_name: "testapp",
port: 5432,
maintenance_window: "mon:19:00-mon:19:30",
backup_window: "09:00-10:00",
backup_retention_period: 31,
force_ssl: true,
cluster_id: "new-dedicated-cluster-id",
account_id: 123456789012,
parameter_store_path_db_url_name: "DATABASE_URL",
sql_backup_scheduled_task_environment_variables: [
{
"name" => "foo",
"value" => "bar"
}
],
check_sql_backup_scheduled_task_environment_variables: [
{
"name" => "foo",
"value" => "bar"
}
],
sync_sql_backup_to_azure: false,
replication_bucket_destination_arn: "arn:aws:s3:::dest-bucket",
replication_kms_key_id: "key-id",
codebuild_access: [
"service-name"
]
)
end
let!(:rds_test) do
RdsTest.new(
rds: rds,
env: {name: "staging", details: {"min_servers" => 2, "max_servers" => 4}},
helper: helper,
logger: logger,
terraform: terraform
)
end
describe "#call" do
before do
allow(Terraform).to receive(:init)
allow(Terraform).to receive(:ensure_presence_of_workspace)
allow(logger).to receive(:info)
end
let(:workspace_name) { "new-dedicated-cluster-id-testservice-rds-staging" }
before { rds_test.call }
it "changes to the ecs-services directory" do
directory = File.join(
Infrastructure::APP_ROOT,
Infrastructure::PATH,
"rds"
)
expect(helper).to have_received(:change_to).with(directory)
end
it "logs our intention to run Terraform init" do
expect(logger).to have_received(:info).with(
"Running terraform init for #{workspace_name}"
)
end
it "runs Terraform init, with upgrade option" do
expect(terraform).to have_received(:init).with(upgrade: true)
end
it "ensures presence of workspace" do
expect(terraform).to have_received(:ensure_presence_of_workspace)
.with(workspace_name)
end
it "logs our intention to run Terraform fmt" do
expect(logger).to have_received(:info).with(
"Running terraform fmt for #{workspace_name}"
)
end
it "runs Terraform fmt with check and diff options" do
expect(terraform).to have_received(:fmt).with("-check -diff")
end
it "logs our intention to run Terraform validate" do
expect(logger).to have_received(:info).with(
"Running terraform validate for #{workspace_name}"
)
end
it "runs Terraform validate, with upgrade option" do
env_config = {"min_servers" => 2,
"max_servers" => 4,
"rds" => {
"identifier" => "testservice",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test"
],
"staging" => [
"test"
]
},
"instance_class" => {
"production" => "db.t2.small",
"staging" => "db.t2.micro"
},
"engine" => "postgres",
"engine_version" => "11.4",
"allocated_storage" => 20,
"storage_encrypted" => true,
"storage_type" => "gp3",
"db_name" => "testapp",
"port" => 5432,
"replication_bucket_destination_arn" => "arn:aws:s3:::dest-bucket",
"replication_kms_key_id" => "key-id",
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"codebuild_access" => [
"service-name"
],
"sync_sql_backup_to_azure" => false
},
"account_id" => 123456789012,
"infrastructure_name" => "new-dedicated-cluster-id",
"environment" => "staging",
"dalmatian_role" => "dalmatian-read"}
expect(terraform).to have_received(:validate).with(
tfvars: env_config
)
end
it "changes back to the app root directory" do
expect(helper).to have_received(:change_to).with(Infrastructure::APP_ROOT)
end
end
end
end
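Taken together, the examples above pin down a fixed init, workspace, fmt, validate sequence. A sketch of that sequence as one method (the `run_terraform_checks` name and the recorder double are hypothetical; the real class may structure this differently):

```ruby
# Hypothetical condensation of the call sequence the RdsTest spec verifies.
# `terraform` and `logger` are duck-typed collaborators, as in the spec doubles.
def run_terraform_checks(terraform:, logger:, workspace_name:, tfvars:)
  logger.info("Running terraform init for #{workspace_name}")
  terraform.init(upgrade: true)
  terraform.ensure_presence_of_workspace(workspace_name)
  logger.info("Running terraform fmt for #{workspace_name}")
  terraform.fmt("-check -diff")
  logger.info("Running terraform validate for #{workspace_name}")
  terraform.validate(tfvars: tfvars)
end

# A tiny recorder standing in for both collaborators shows the call order:
class Recorder
  attr_reader :calls

  def initialize
    @calls = []
  end

  def method_missing(name, *_args, **_kwargs)
    @calls << name
  end

  def respond_to_missing?(*_args)
    true
  end
end

recorder = Recorder.new
run_terraform_checks(
  terraform: recorder, logger: recorder,
  workspace_name: "example-rds-staging", tfvars: {}
)
puts recorder.calls.inspect
# => [:info, :init, :ensure_presence_of_workspace, :info, :fmt, :info, :validate]
```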
@@ -0,0 +1,252 @@
require_relative "../support/spec_helper"
module Dalmatian
RSpec.describe Rds do
let(:rds_reference) do
{
"identifier" => "testservice",
"in_use_by" => [
"test-service"
],
"instance_class" => {
"production" => "db.t2.small",
"staging" => "db.t2.micro"
},
"engine" => "postgres",
"engine_version" => "11.4",
"allocated_storage" => 20,
"storage_encrypted" => true,
"storage_type" => "gp3",
"db_name" => "testapp",
"port" => 5432,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::dest-bucket",
"replication_kms_key_id" => "key-id",
"codebuild_access" => [
"service-name"
]
}
end
let(:cluster) do
instance_double(
Cluster,
id: "new-dedicated-cluster",
name: "new-dedicated-cluster",
environments: {"staging" => {}, "production" => {}},
services: [double(name: "test-service", domain_names: {"staging" => ["example-domain-name.co.uk"]}, launch_on_cluster: "test"),
double(name: "test-service", domain_names: {"staging" => ["example-domain-name.co.uk"]}, launch_on_cluster: "")],
account_id: 123456789012,
rdss: [double(reference: {identifier: "someotherrds"}),
double(reference: rds_reference)]
)
end
let(:rds) { Rds.new(cluster: cluster, reference: rds_reference) }
describe "#identifier" do
it "uses rds identifier" do
expect(rds.identifier).to eq("testservice")
end
end
describe "#in_use_by" do
it "uses rds in_use_by list" do
expect(rds.in_use_by).to eq(["test-service"])
end
end
describe "#clusters_in_use" do
it "uses rds clusters_in_use list" do
expect(rds.clusters_in_use).to eq({"staging" => ["test", "default_dalmatian_ecs_cluster"], "production" => ["test", "default_dalmatian_ecs_cluster"]})
end
end
describe "#instance_class" do
it "uses rds instance_class" do
expect(rds.instance_class).to eq({"production" => "db.t2.small", "staging" => "db.t2.micro"})
end
end
describe "#engine" do
it "uses rds engine" do
expect(rds.engine).to eq("postgres")
end
end
describe "#engine_version" do
it "uses the rds engine_version" do
expect(rds.engine_version).to eq("11.4")
end
end
describe "#allocated_storage" do
it "uses the rds allocated_storage" do
expect(rds.allocated_storage).to eq(20)
end
end
describe "#storage_encrypted" do
it "uses the rds storage_encrypted bool" do
expect(rds.storage_encrypted).to eq(true)
end
end
describe "#storage_type" do
it "uses the rds storage_type gp3" do
expect(rds.storage_type).to eq("gp3")
end
end
describe "#db_name" do
it "uses the rds db_name" do
expect(rds.db_name).to eq("testapp")
end
end
describe "#port" do
it "uses the rds port" do
expect(rds.port).to eq(5432)
end
end
describe "#maintenance_window" do
it "uses the rds maintenance_window" do
expect(rds.maintenance_window).to eq("mon:19:00-mon:19:30")
end
end
describe "#backup_window" do
it "uses the rds backup_window" do
expect(rds.backup_window).to eq("09:00-10:00")
end
end
describe "#backup_retention_period" do
it "uses the rds backup_retention_period" do
expect(rds.backup_retention_period).to eq(31)
end
end
describe "#force_ssl" do
it "uses the rds force_ssl bool" do
expect(rds.force_ssl).to eq(true)
end
end
describe "#parameter_store_path_db_url_name" do
it "uses the rds parameter_store_path_db_url_name" do
expect(rds.parameter_store_path_db_url_name).to eq("DATABASE_URL")
end
end
describe "#sql_backup_scheduled_task_environment_variables" do
it "uses the rds sql_backup_scheduled_task_environment_variables" do
expect(rds.sql_backup_scheduled_task_environment_variables).to eq([{"name" => "foo", "value" => "bar"}])
end
end
describe "#check_sql_backup_scheduled_task_environment_variables" do
it "uses the rds check_sql_backup_scheduled_task_environment_variables" do
expect(rds.check_sql_backup_scheduled_task_environment_variables).to eq([{"name" => "foo", "value" => "bar"}])
end
end
describe "#sync_sql_backup_to_azure" do
it "will have offsite backups disabled by default" do
expect(rds.sync_sql_backup_to_azure).to eq(false)
end
end
describe "#replication_bucket_destination_arn" do
it "uses the rds replication_bucket_destination_arn" do
expect(rds.replication_bucket_destination_arn).to eq("arn:aws:s3:::dest-bucket")
end
end
describe "#replication_kms_key_id" do
it "uses the rds replication_kms_key_id" do
expect(rds.replication_kms_key_id).to eq("key-id")
end
end
describe "#codebuild_access" do
it "uses the rds codebuild_access" do
expect(rds.codebuild_access).to eq(["service-name"])
end
end
describe "#to_params" do
it "provides a hash of attributes for use in deployment" do
expected_params = {
"identifier" => "testservice",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test",
"default_dalmatian_ecs_cluster"
],
"staging" => [
"test",
"default_dalmatian_ecs_cluster"
]
},
"instance_class" => {
"production" => "db.t2.small",
"staging" => "db.t2.micro"
},
"engine" => "postgres",
"engine_version" => "11.4",
"allocated_storage" => 20,
"storage_encrypted" => true,
"storage_type" => "gp3",
"db_name" => "testapp",
"port" => 5432,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::dest-bucket",
"replication_kms_key_id" => "key-id",
"codebuild_access" => [
"service-name"
]
}
expect(rds.to_params).to eq(expected_params)
end
end
end
end
@@ -0,0 +1,14 @@
begin
  require "rspec/core/rake_task"
  RSpec::Core::RakeTask.new(:spec)
rescue LoadError
end

begin
  require "standard/rake"
rescue LoadError
end

desc "run tests"
task test: %i[standard spec]
@@ -0,0 +1,106 @@
--- FILE CONTENT (truncated: showing lines 2151-2250 of 3543) ---
],
"staging" => [
"test"
]
},
"instance_class" => {
"production" => "db.t2.small",
"staging" => "db.t2.micro"
},
"engine" => "postgres",
"engine_version" => "11.4",
"allocated_storage" => 20,
"storage_encrypted" => true,
"storage_type" => "gp3",
"db_name" => "testapp",
"port" => 5432,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::your-destination-bucket-name",
"replication_kms_key_id" => "your-destination-kms-key-id",
"codebuild_access" => [
"service-name"
]
}
)
)
)
end
let(:cluster1_plan_cmd_production_elasticache_cluster) do
terraform_plan_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.medium",
"min_servers" => "2",
"max_servers" => "4",
"environment" => "production",
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"elasticache_cluster" => {
"identifier" => "testredis",
"in_use_by" => [
"test-service"
],
"node_type" => "cache.t2.micro",
"node_count" => 1,
"engine" => "redis",
"engine_version" => "5.0.6",
"parameters" => [],
"port" => 6379,
"maintenance_window" => "mon:19:00-mon:22:00",
"snapshot_window" => "09:00-10:00",
"parameter_store_path_elasticache_cluster_url_name" => "REDIS_URL"
}
)
)
)
end
let(:cluster1_plan_cmd_staging_elasticache_cluster) do
terraform_plan_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "4",
"track_revision" => "feature/experiment",
"extra_ecs_clusters" => [
{
"name" => "test",
"subnets_name" => "extra_private_subnets",
"min_servers" => "2",
"max_servers" => "4",
"instance_type" => "t3.small"
}
],
"tinyproxy" => {
"create" => true
},
"environment" => "staging",
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",

View File

@@ -0,0 +1,6 @@
# Load all Rake tasks
APP_ROOT = File.expand_path(File.dirname("."))
Dir.glob("lib/tasks/*.rake").each { |r| load r }
require "standard/rake"
@@ -0,0 +1,103 @@
--- FILE CONTENT (truncated: showing lines 101-198 of 198) ---
end
it "logs our intention to run Terraform init" do
expect(logger).to have_received(:info).with(
"Running terraform init for #{workspace_name}"
)
end
it "runs Terraform init, with upgrade option" do
expect(terraform).to have_received(:init).with(upgrade: true)
end
it "ensures presence of workspace" do
expect(terraform).to have_received(:ensure_presence_of_workspace)
.with(workspace_name)
end
it "logs our intention to run Terraform fmt" do
expect(logger).to have_received(:info).with(
"Running terraform fmt for #{workspace_name}"
)
end
it "runs Terraform fmt with check and diff options" do
expect(terraform).to have_received(:fmt).with("-check -diff")
end
it "logs our intention to run Terraform validate" do
expect(logger).to have_received(:info).with(
"Running terraform validate for #{workspace_name}"
)
end
it "runs Terraform validate, with upgrade option" do
env_config = {"min_servers" => 2,
"max_servers" => 4,
"rds" => {
"identifier" => "testservice",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test"
],
"staging" => [
"test"
]
},
"instance_class" => {
"production" => "db.t2.small",
"staging" => "db.t2.micro"
},
"engine" => "postgres",
"engine_version" => "11.4",
"allocated_storage" => 20,
"storage_encrypted" => true,
"storage_type" => "gp3",
"db_name" => "testapp",
"port" => 5432,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"codebuild_access" => [
"service-name"
],
"sync_sql_backup_to_azure" => false
},
"account_id" => 123456789012,
"infrastructure_name" => "new-dedicated-cluster-id",
"environment" => "staging",
"dalmatian_role" => "dalmatian-read"}
expect(terraform).to have_received(:validate).with(
tfvars: env_config
)
end
it "changes back to the app root directory" do
expect(helper).to have_received(:change_to).with(Infrastructure::APP_ROOT)
end
end
end
end
@@ -0,0 +1,106 @@
--- FILE CONTENT (truncated: showing lines 2001-2100 of 3543) ---
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.medium",
"min_servers" => "2",
"max_servers" => "4",
"environment" => "production",
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"rds" => {
"identifier" => "testservice",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test"
],
"staging" => [
"test"
]
},
"instance_class" => {
"production" => "db.t2.small",
"staging" => "db.t2.micro"
},
"engine" => "postgres",
"engine_version" => "11.4",
"allocated_storage" => 20,
"storage_encrypted" => true,
"storage_type" => "gp3",
"db_name" => "testapp",
"port" => 5432,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::your-destination-bucket-name",
"replication_kms_key_id" => "your-destination-kms-key-id",
"codebuild_access" => [
"service-name"
]
}
)
)
)
end
let(:cluster1_plan_cmd_staging_waf) do
terraform_plan_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "4",
"track_revision" => "feature/experiment",
"extra_ecs_clusters" => [
{
"name" => "test",
"subnets_name" => "extra_private_subnets",
"min_servers" => "2",
"max_servers" => "4",
"instance_type" => "t3.small"
}
],
"tinyproxy" => {
"create" => true
},
"environment" => "staging",
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"waf" => {
"name" => "test-1",
"action" => "count",
"ip_deny_list" => [],
"aws_managed_rules" => [
{
"name" => "AWSManagedRulesSQLiRuleSet",
"excluded_path_patterns" => [
"/wp-admin/async-upload.php"
]
},
{
"name" => "AWSManagedRulesCommonRuleSet",
"exclude_rules" => [


@@ -0,0 +1,3 @@
{
"error": "File not found: /Users/bob/git/dxw/dalmatian/spec/unit/aurora_test_spec.rb"
}


@@ -0,0 +1,105 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 1-100 of 196 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 100.
--- FILE CONTENT (truncated) ---
require_relative "../support/spec_helper"
module Dalmatian
RSpec.describe RdsTest do
let(:helper) { class_double(Helper, change_to: true, run!: true, git_checkout: true) }
let(:logger) { class_double(Logger, info: true) }
let(:terraform) do
class_double(
Terraform,
fmt: true,
init: true,
validate: true,
ensure_presence_of_workspace: true
)
end
let(:rds) do
instance_double(
Rds,
identifier: "testservice",
in_use_by: [
"test-service"
],
clusters_in_use: {
"production" => [
"test"
],
"staging" => [
"test"
]
},
instance_class: {
"production" => "db.t2.small",
"staging" => "db.t2.micro"
},
engine: "postgres",
engine_version: "11.4",
allocated_storage: 20,
storage_encrypted: true,
storage_type: "gp3",
db_name: "testapp",
port: 5432,
maintenance_window: "mon:19:00-mon:19:30",
backup_window: "09:00-10:00",
backup_retention_period: 31,
force_ssl: true,
cluster_id: "new-dedicated-cluster-id",
account_id: 123456789012,
parameter_store_path_db_url_name: "DATABASE_URL",
sql_backup_scheduled_task_environment_variables: [
{
"name" => "foo",
"value" => "bar"
}
],
check_sql_backup_scheduled_task_environment_variables: [
{
"name" => "foo",
"value" => "bar"
}
],
sync_sql_backup_to_azure: false,
codebuild_access: [
"service-name"
]
)
end
let!(:rds_test) do
RdsTest.new(
rds: rds,
env: {name: "staging", details: {"min_servers" => 2, "max_servers" => 4}},
helper: helper,
logger: logger,
terraform: terraform
)
end
describe "#call" do
before do
allow(Terraform).to receive(:init)
allow(Terraform).to receive(:ensure_presence_of_workspace)
allow(logger).to receive(:info)
end
let(:workspace_name) { "new-dedicated-cluster-id-testservice-rds-staging" }
before { rds_test.call }
it "changes to the rds directory" do
directory = File.join(
Infrastructure::APP_ROOT,
Infrastructure::PATH,
"rds"
)
expect(helper).to have_received(:change_to).with(directory)
end


@@ -0,0 +1,52 @@
module Dalmatian
class AuroraDeployment
include Deployable
def initialize(
aurora:,
env:,
auto_approve: false,
plan: false,
destroy: false,
verbose: false
)
@aurora = aurora
@env = env
@helper = Helper
@logger = Logger
@auto_approve = auto_approve
@plan = plan
@destroy = destroy
@verbose = verbose
end
private
attr_reader :aurora, :env, :helper, :logger, :auto_approve, :plan, :destroy, :verbose
def target_directory
File.join(
Infrastructure::APP_ROOT,
Infrastructure::PATH,
"aurora"
)
end
def env_config
env_details.merge(
"account_id" => aurora.account_id,
"infrastructure_name" => aurora.cluster_id,
"dalmatian_role" => dalmatian_role,
"aurora" => aurora.to_params
)
end
def checkout_revision_if_required
false
end
def workspace_name
[aurora.cluster_id, aurora.identifier, "aurora", env_name].join("-")
end
end
end


@@ -0,0 +1,106 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 581-680 of 3543 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 680.
--- FILE CONTENT (truncated) ---
"staging" => 1
},
"engine" => "aurora-postgresql",
"engine_version" => "11.9",
"db_name" => "testapp",
"port" => 5432,
"force_ssl" => true,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::your-destination-bucket-name",
"replication_kms_key_id" => "your-destination-kms-key-id"
}
)
)
)
end
let(:cluster1_validate_cmd_staging_aurora) do
terraform_validate_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "4",
"track_revision" => "feature/experiment",
"extra_ecs_clusters" => [
{
"name" => "test",
"subnets_name" => "extra_private_subnets",
"min_servers" => "2",
"max_servers" => "4",
"instance_type" => "t3.small"
}
],
"tinyproxy" => {
"create" => true
},
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"environment" => "staging",
"aurora" => {
"identifier" => "testaurora",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test"
],
"staging" => [
"test"
]
},
"minimum_size" => {
"production" => 2,
"staging" => 1
},
"maximum_size" => {
"production" => 2,
"staging" => 1
},
"engine" => "aurora-postgresql",
"engine_version" => "11.9",
"db_name" => "testapp",
"port" => 5432,
"force_ssl" => true,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::your-destination-bucket-name",


@@ -0,0 +1,64 @@
module Dalmatian
class AuroraTest
include Testable
def initialize(
aurora:,
env:,
helper: Helper,
logger: Logger,
terraform: Terraform
)
@aurora = aurora
@env = env
@helper = helper
@logger = logger
@terraform = terraform
end
private
attr_reader :aurora, :env, :helper, :logger, :terraform
def env_config
env_details.merge(
"account_id" => aurora.account_id,
"infrastructure_name" => aurora.cluster_id,
"dalmatian_role" => "dalmatian-read",
"environment" => env_name,
"aurora" => {
"identifier" => aurora.identifier,
"in_use_by" => aurora.in_use_by,
"clusters_in_use" => aurora.clusters_in_use,
"engine" => aurora.engine,
"engine_version" => aurora.engine_version,
"db_name" => aurora.db_name,
"port" => aurora.port,
"maintenance_window" => aurora.maintenance_window,
"backup_window" => aurora.backup_window,
"backup_retention_period" => aurora.backup_retention_period,
"force_ssl" => aurora.force_ssl,
"parameter_store_path_db_url_name" => aurora.parameter_store_path_db_url_name,
"sql_backup_scheduled_task_environment_variables" => aurora.sql_backup_scheduled_task_environment_variables,
"sync_sql_backup_to_azure" => aurora.sync_sql_backup_to_azure
}
)
end
def checkout_revision_if_required
false
end
def workspace_name
[aurora.cluster_id, aurora.identifier, "aurora", env_name].join("-")
end
def target_directory
File.join(
Infrastructure::APP_ROOT,
Infrastructure::PATH,
"aurora"
)
end
end
end
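The `workspace_name` scheme above can be exercised on its own. A minimal sketch, using made-up example values (the cluster id, identifier, and environment name below are hypothetical, not taken from a real configuration):

```ruby
# Illustrates the "<cluster_id>-<identifier>-aurora-<env>" workspace naming
# used by AuroraTest#workspace_name; all values here are placeholder examples.
cluster_id = "new-dedicated-cluster-id"
identifier = "testaurora"
env_name = "staging"
puts [cluster_id, identifier, "aurora", env_name].join("-")
# => new-dedicated-cluster-id-testaurora-aurora-staging
```

The same join-with-hyphens convention is what the specs rely on when they stub `terraform workspace new <name>` commands.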


@@ -0,0 +1,70 @@
module Dalmatian
class RdsTest
include Testable
def initialize(
rds:,
env:,
helper: Helper,
logger: Logger,
terraform: Terraform
)
@rds = rds
@env = env
@helper = helper
@logger = logger
@terraform = terraform
end
private
attr_reader :rds, :env, :helper, :logger, :terraform
def env_config
env_details.merge(
"account_id" => rds.account_id,
"infrastructure_name" => rds.cluster_id,
"dalmatian_role" => "dalmatian-read",
"environment" => env_name,
"rds" => {
"identifier" => rds.identifier,
"in_use_by" => rds.in_use_by,
"clusters_in_use" => rds.clusters_in_use,
"instance_class" => rds.instance_class,
"engine" => rds.engine,
"engine_version" => rds.engine_version,
"allocated_storage" => rds.allocated_storage,
"storage_encrypted" => rds.storage_encrypted,
"storage_type" => rds.storage_type,
"db_name" => rds.db_name,
"port" => rds.port,
"maintenance_window" => rds.maintenance_window,
"backup_window" => rds.backup_window,
"backup_retention_period" => rds.backup_retention_period,
"force_ssl" => rds.force_ssl,
"parameter_store_path_db_url_name" => rds.parameter_store_path_db_url_name,
"sql_backup_scheduled_task_environment_variables" => rds.sql_backup_scheduled_task_environment_variables,
"check_sql_backup_scheduled_task_environment_variables" => rds.check_sql_backup_scheduled_task_environment_variables,
"sync_sql_backup_to_azure" => rds.sync_sql_backup_to_azure,
"codebuild_access" => rds.codebuild_access
}
)
end
def checkout_revision_if_required
false
end
def workspace_name
[rds.cluster_id, rds.identifier, "rds", env_name].join("-")
end
def target_directory
File.join(
Infrastructure::APP_ROOT,
Infrastructure::PATH,
"rds"
)
end
end
end


@@ -0,0 +1,52 @@
module Dalmatian
class RdsDeployment
include Deployable
def initialize(
rds:,
env:,
auto_approve: false,
plan: false,
destroy: false,
verbose: false
)
@rds = rds
@env = env
@helper = Helper
@logger = Logger
@auto_approve = auto_approve
@plan = plan
@destroy = destroy
@verbose = verbose
end
private
attr_reader :rds, :env, :helper, :logger, :auto_approve, :plan, :destroy, :verbose
def target_directory
File.join(
Infrastructure::APP_ROOT,
Infrastructure::PATH,
"rds"
)
end
def env_config
env_details.merge(
"account_id" => rds.account_id,
"infrastructure_name" => rds.cluster_id,
"dalmatian_role" => dalmatian_role,
"rds" => rds.to_params
)
end
def checkout_revision_if_required
false
end
def workspace_name
[rds.cluster_id, rds.identifier, "rds", env_name].join("-")
end
end
end


@@ -0,0 +1,45 @@
module Dalmatian
class ClusterTest
include Testable
def initialize(
cluster:,
env:,
helper: Helper,
logger: Logger,
terraform: Terraform
)
@cluster = cluster
@env = env
@helper = helper
@logger = logger
@terraform = terraform
end
private
attr_reader :cluster, :env, :helper, :logger, :terraform
def env_config
env_details.merge(
"cluster_name" => "#{cluster.name}-#{env_name}",
"account_id" => cluster.account_id,
"infrastructure_name" => cluster.id,
"dalmatian_role" => "dalmatian-read",
"environment" => env_name
)
end
def checkout_revision_if_required
false
end
def workspace_name
[cluster.name, "ecs", env_name].join("-")
end
def target_directory
cluster.target_directory
end
end
end


@@ -0,0 +1,106 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 351-450 of 3251 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 450.
--- FILE CONTENT (truncated) ---
let(:cluster1_validate_cmd_production_rds) do
terraform_validate_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.medium",
"min_servers" => "2",
"max_servers" => "4",
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"environment" => "production",
"rds" => {
"identifier" => "testservice",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test"
],
"staging" => [
"test"
]
},
"instance_class" => {
"production" => "db.t2.small",
"staging" => "db.t2.micro"
},
"engine" => "postgres",
"engine_version" => "11.4",
"allocated_storage" => 20,
"storage_encrypted" => true,
"storage_type" => "gp3",
"db_name" => "testapp",
"port" => 5432,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::your-destination-bucket-name",
"replication_kms_key_id" => "your-destination-kms-key-id",
"codebuild_access" => [
"service-name"
]
}
)
)
)
end
let(:cluster1_validate_cmd_staging_waf) do
terraform_validate_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "4",
"track_revision" => "feature/experiment",
"extra_ecs_clusters" => [
{
"name" => "test",
"subnets_name" => "extra_private_subnets",
"min_servers" => "2",
"max_servers" => "4",
"instance_type" => "t3.small"
}
],
"tinyproxy" => {
"create" => true
},
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"environment" => "staging",
"waf" => {
"name" => "test-1",
"action" => "count",
"ip_deny_list" => [],
"aws_managed_rules" => [
{
"name" => "AWSManagedRulesSQLiRuleSet",
"excluded_path_patterns" => [
"/wp-admin/async-upload.php"
]
},


@@ -0,0 +1,106 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 401-500 of 3243 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 500.
--- FILE CONTENT (truncated) ---
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"codebuild_access" => [
"service-name"
]
}
)
)
)
end
let(:cluster1_validate_cmd_staging_waf) do
terraform_validate_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "4",
"track_revision" => "feature/experiment",
"extra_ecs_clusters" => [
{
"name" => "test",
"subnets_name" => "extra_private_subnets",
"min_servers" => "2",
"max_servers" => "4",
"instance_type" => "t3.small"
}
],
"tinyproxy" => {
"create" => true
},
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"environment" => "staging",
"waf" => {
"name" => "test-1",
"action" => "count",
"ip_deny_list" => [],
"aws_managed_rules" => [
{
"name" => "AWSManagedRulesSQLiRuleSet",
"excluded_path_patterns" => [
"/wp-admin/async-upload.php"
]
},
{
"name" => "AWSManagedRulesCommonRuleSet",
"exclude_rules" => [
"SizeRestrictions_BODY"
]
}
],
"associations" => {
shared_loadbalancers: [
"test-lb-1"
],
service_cloudfront: [
"test-service"
]
}
}
)
)
)
end
let(:cluster1_validate_cmd_staging_rds) do
terraform_validate_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "4",
"track_revision" => "feature/experiment",
"extra_ecs_clusters" => [
{
"name" => "test",
"subnets_name" => "extra_private_subnets",
"min_servers" => "2",
"max_servers" => "4",
"instance_type" => "t3.small"
}
],
"tinyproxy" => {
"create" => true
},
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"environment" => "staging",
"rds" => {
"identifier" => "testservice",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [


@@ -0,0 +1,106 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 401-500 of 1663 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 500.
--- FILE CONTENT (truncated) ---
]
},
"instance_class" => {
"production" => "db.t2.small",
"staging" => "db.t2.micro"
},
"engine" => "postgres",
"engine_version" => "11.4",
"allocated_storage" => 20,
"storage_encrypted" => true,
"storage_type" => "gp3",
"db_name" => "testapp",
"port" => 5432,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"codebuild_access" => [
"service-name"
]
}
)
)
)
end
let(:cluster1_apply_cmd_staging_rds) do
terraform_apply_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "4",
"track_revision" => "feature/experiment",
"extra_ecs_clusters" => [
{
"name" => "test",
"subnets_name" => "extra_private_subnets",
"min_servers" => "2",
"max_servers" => "4",
"instance_type" => "t3.small"
}
],
"tinyproxy" => {
"create" => true
},
"environment" => "staging",
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-admin",
"rds" => {
"identifier" => "testservice",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test"
],
"staging" => [
"test"
]
},
"instance_class" => {
"production" => "db.t2.small",
"staging" => "db.t2.micro"
},
"engine" => "postgres",
"engine_version" => "11.4",
"allocated_storage" => 20,
"storage_encrypted" => true,
"storage_type" => "gp3",
"db_name" => "testapp",
"port" => 5432,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [


@@ -0,0 +1,106 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 2501-2600 of 3243 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 2600.
--- FILE CONTENT (truncated) ---
},
"cloudfront" => {
"production" => {
"create" => true,
"managed_cache_policy" => "CachingDisabled",
"managed_origin_policy" => "AllViewerExceptHostHeader",
"managed_response_headers_policy" => "CORS-with-preflight-and-SecurityHeadersPolicy",
"tls_protocol_version" => "TLSv1.2_2019",
"origin_keepalive_timeout" => "60",
"origin_read_timeout" => "60",
"basic_auth" => false,
"basic_auth_users_extra" => {},
"viewer_request_functions" => [
{
"name" => "default",
"true_client_ip_header" => true,
"ip_subnet_allow_list" => [
"0.0.0.0/0"
],
"redirects" => [
{
"from_hostname_pattern" => "example-old-domain-name.*",
"from_path_pattern" => "/*",
"to_hostname" => "example-domain-name.co.uk",
"to_path" => "/${path}"
}
]
}
],
"offline_page_http_status" => {
"500" => "/error-pages/500.html",
"501" => "/error-pages/501.html",
"502" => "/error-pages/502.html",
"503" => "/error-pages/503.html",
"504" => "/error-pages/504.html"
},
"custom_origins" => [
{
"origin" => "test-media-production.s3.amazonaws.com",
"id" => "test-media-production-s3"
}
],
"bypass_protection" => {
"enabled" => true,
"exclude_domains" => [
"example.com"
]
},
"custom_behaviors" => [
{
"target_origin_id" => "test-media-production-s3",
"min_ttl" => 1200,
"default_ttl" => 3600,
"max_ttl" => 86400,
"associate_viewer_request_function" => "default",
"managed_cache_policy" => "CachingDisabled",
"managed_origin_policy" => "AllViewerExceptHostHeader",
"managed_response_headers_policy" => "CORS-with-preflight-and-SecurityHeadersPolicy",
"path_pattern" => "/media/*"
}
]
},
"staging" => {
"create" => true,
"managed_cache_policy" => "CachingDisabled",
"managed_origin_policy" => "AllViewerExceptHostHeader",
"managed_response_headers_policy" => "CORS-with-preflight-and-SecurityHeadersPolicy",
"tls_protocol_version" => "TLSv1.2_2021",
"origin_keepalive_timeout" => "10",
"origin_read_timeout" => "40",
"basic_auth" => true,
"basic_auth_users_extra" => {},
"viewer_request_functions" => [
{
"name" => "default",
"true_client_ip_header" => true,
"ip_subnet_allow_list" => [
"0.0.0.0/0"
],
"redirects" => [
{
"from_hostname_pattern" => "example-old-domain-name.*",
"from_path_pattern" => "/*",
"to_hostname" => "example-domain-name.co.uk",
"to_path" => "/${path}"
}
]
}
],
"offline_page_http_status" => {
"500" => "/error-pages/500.html",
"501" => "/error-pages/501.html",
"502" => "/error-pages/502.html",
"503" => "/error-pages/503.html",
"504" => "/error-pages/504.html"
},
"custom_origins" => [
{
"origin" => "test-media-staging.s3.amazonaws.com",
"id" => "test-media-staging-s3"


@@ -0,0 +1,105 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 3101-3200 of 3243 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 3200.
--- FILE CONTENT (truncated) ---
allow(Helper).to receive(:run!).with(cluster3_plan_cmd_staging)
allow(Helper).to receive(:run!).with(cluster4_validate_cmd_production)
allow(Helper).to receive(:run!).with(cluster4_plan_cmd_production)
Infrastructure.new(
ConfigurationReader.new(path: "./dalmatian.yml.example", parameter_store: fake_parameter_store).infrastructures
).test
end
it 'invokes "terraform validate" with the expected cluster options' do
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_hz)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_s3)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_vpn_cg)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production).twice
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging).twice
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_waf)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_waf)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_rds)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_rds)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_elasticache_cluster)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_elasticache_cluster)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_opensearch_cluster)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_opensearch_cluster)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_service)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_service)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_shared_loadbalancer)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_shared_loadbalancer)
expect(Helper).to have_received(:run!).with(cluster2_validate_cmd_production)
expect(Helper).to have_received(:run!).with(cluster2_validate_cmd_staging)
expect(Helper).to have_received(:run!).with(cluster3_validate_cmd_staging)
expect(Helper).to have_received(:run!).with(cluster4_validate_cmd_production)
end
it 'invokes "terraform validate" with the expected source options' do
expect(Helper).to have_received(:run!).with(source1_validate_cmd_production).twice
expect(Helper).to have_received(:run!).with(source1_validate_cmd_staging).twice
end
describe 'use of "terraform plan"' do
it 'passes the hosted zone config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_hz)
end
end
it 'passes the s3 config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_s3)
end
end
it 'passes the vpn customer gateway config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_vpn_cg)
end
end
it 'passes the base config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_production).twice
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_staging).twice
end
end
it 'passes the waf config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_production_waf)
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_staging_waf)
end
end
it 'passes the rds config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_production_rds)
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_staging_rds)
end
end
it 'passes the elasticache config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!)
.with(cluster1_plan_cmd_production_elasticache_cluster)
expect(Helper).to have_received(:run!)
.with(cluster1_plan_cmd_staging_elasticache_cluster)
end
end


@@ -0,0 +1,106 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 1-100 of 3243 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 100.
--- FILE CONTENT (truncated) ---
require_relative "../support/spec_helper"
module Dalmatian
RSpec.describe "tests local configuration" do
def fake_parameter_store
double(
"parameter_store",
get_parameter: "",
get_parameters_by_path: {}
)
end
def tfvar_arg_string(tfvars)
Terraform.tfvar_arg_string(tfvars)
end
def terraform_validate_cmd(tfvar_string)
"#{tfvar_string} terraform validate"
end
def terraform_plan_cmd(tfvar_string)
"#{tfvar_string} terraform plan"
end
def stub_workspace_creation_cmds
%w[
new-dedicated-cluster-example-domain-name-com-hz
new-dedicated-cluster-test-s3
new-dedicated-cluster-test-vpn-vpn-cg
new-dedicated-cluster-ecs-production
new-dedicated-cluster-ecs-staging
new-dedicated-cluster-0-production
new-dedicated-cluster-0-staging
new-dedicated-cluster-test-service-staging
new-dedicated-cluster-test-service-production
new-dedicated-cluster-test-1-waf-staging
new-dedicated-cluster-test-1-waf-production
new-dedicated-cluster-testservice-rds-staging
new-dedicated-cluster-testservice-rds-production
new-dedicated-cluster-testredis-elasticache-cluster-staging
new-dedicated-cluster-testredis-elasticache-cluster-production
new-dedicated-cluster-testos-opensearch-cluster-staging
new-dedicated-cluster-testos-opensearch-cluster-production
new-dedicated-cluster-test-lb-1-shared-loadbalancer-staging
new-dedicated-cluster-test-lb-1-shared-loadbalancer-production
shared-new-cluster-ecs-production
shared-new-cluster-ecs-staging
existing-shared-cluster-staging-0-staging
existing-shared-cluster-production-0-production
].each do |workspace_name|
allow(Helper).to receive(:run!).with("terraform workspace new #{workspace_name}")
end
end
def stub_git_checkout
allow(Helper).to receive(:run!).with("git checkout feature/experiment")
end
def stub_git_clone_cmds
[
{
source: "git@github.com:dxw/awesome-app-dalmatian-config",
name: "new-dedicated-cluster-0.config"
},
{
source: "git@github.com:dxw/funky-app-dalmatian-config",
name: "existing-shared-cluster-staging-0.config"
},
{
source: "git@github.com:dxw/neat-app-dalmatian-config",
name: "existing-shared-cluster-production-0.config"
},
{
source: "git@github.com:dxw/neat-app-dalmatian-config",
name: "existing-shared-cluster-production-0.config"
}
].each do |src|
allow(Helper).to receive(:run!)
.with("git clone #{src.fetch(:source)} terraform/infrastructures/#{src.fetch(:name)}")
end
end
def infrastructure_defaults
{
"region" => "eu-west-2",
"cidr" => "10.0.0.0/16",
"root_domain_zone" => "dalmatian.dxw.net",
"internal_domain_zone" => "dalmatian.internal",
"ecs_private_subnets" => [
{
"availability_zone" => "eu-west-2a",
"cidr" => "10.0.128.0/24"
},
{
"availability_zone" => "eu-west-2b",
"cidr" => "10.0.129.0/24"
},
{
"availability_zone" => "eu-west-2c",

View File

@@ -0,0 +1,98 @@
# Database backups
Automated RDS backups are taken daily within AWS. The time and retention periods vary depending on the configuration.
Manual backups are also possible when required.
As the RDS instances are not publicly available, manual backups must be done via the ECS instances.
Follow the [Shell access to ECS instances](shell-access-to-ecs-instances.md) guide to access one of the ECS instances.
## Manual backup of PostgreSQL databases
The postgres container image is available in ECR, so as long as you have HTTPS access to the VPC endpoints, you will be able to pull it.
The URI of the postgres container is `<aws-account-number>.dkr.ecr.<aws-region>.amazonaws.com/<infrastructure-environment-name>-postgres`. This can also be found via the [ECR UI](https://console.aws.amazon.com/ecr/repositories).
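The URI template above can be assembled from its parts; a quick sketch using hypothetical values (substitute your own account number, region, and infrastructure-environment name):

```shell
# Hypothetical values for illustration only
ACCOUNT=123456789012
REGION=eu-west-2
INFRA_ENV=myinfra-staging

# Assemble the ECR repository URI following the documented pattern
URI="${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com/${INFRA_ENV}-postgres"
echo "$URI"
```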
You will need to log in to ECR with Docker before pulling the container:
```
$(aws ecr get-login --no-include-email --region <aws-region>)
```
Create a directory on the instance to store the database dump:
```
mkdir -p /db-backup
```
To create a PostgreSQL SQL dump, run:
```
docker run -i -v /db-backup:/db-backup <postgres-container-uri> /bin/bash -c "pg_dump postgres://<username>:<password>@<rds-endpoint>:5432/<database-name> > /db-backup/<backup-name>.sql"
```
To transfer the backup, follow the [Transferring files to the ECS instances](transferring-files-to-the-ecs-instances.md) guide.
## Cross-account S3 replication for SQL backups
SQL backups can be replicated to an S3 bucket in another AWS account for offsite storage or disaster recovery.
### Source Account Configuration
In your `dalmatian.yml`, specify the destination bucket ARN and (optionally) the KMS key ID for the destination bucket:
```yaml
rds:
replication_bucket_destination_arn: "arn:aws:s3:::your-destination-bucket-name"
replication_kms_key_id: "your-destination-kms-key-id"
```
### Destination Account Configuration
The destination account must permit the source account's replication role to write to the bucket and use the KMS key.
#### 1. Destination Bucket Policy
Add a policy to the destination bucket to allow the replication role from the source account:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowReplicationFromDalmatianSource",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<SOURCE_ACCOUNT_ID>:role/<INFRASTRUCTURE_NAME>-<RDS_ID>-sql-backup-replication"
},
"Action": [
"s3:ReplicateObject",
"s3:ReplicateDelete",
"s3:ReplicateTags",
"s3:ObjectOwnerOverrideToBucketOwner"
],
"Resource": "arn:aws:s3:::<DESTINATION_BUCKET_NAME>/*"
}
]
}
```
#### 2. Destination KMS Key Policy (Optional)
If the destination bucket uses a Customer Managed Key (CMK) for encryption, the key policy must allow the source replication role to use it:
```json
{
"Sid": "AllowUsageByDalmatianSourceReplicationRole",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<SOURCE_ACCOUNT_ID>:role/<INFRASTRUCTURE_NAME>-<RDS_ID>-sql-backup-replication"
},
"Action": [
"kms:Encrypt",
"kms:GenerateDataKey"
],
"Resource": "*"
}
```
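The replication role ARN referenced as the `Principal` in both policies follows a fixed naming pattern; a small sketch assembling it with hypothetical values (substitute your real source account ID, infrastructure name, and RDS identifier):

```shell
# Hypothetical values for illustration only
SOURCE_ACCOUNT_ID=111111111111
INFRASTRUCTURE_NAME=myinfra
RDS_ID=testservice

# Assemble the replication role ARN following the documented pattern
ROLE_ARN="arn:aws:iam::${SOURCE_ACCOUNT_ID}:role/${INFRASTRUCTURE_NAME}-${RDS_ID}-sql-backup-replication"
echo "$ROLE_ARN"
```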

View File

@@ -0,0 +1,106 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 101-200 of 437 total lines.
--- FILE CONTENT (truncated) ---
value: cf-distribution.aws.net
- name: www
value: cf-distribution.aws.net
cname_records:
- name: alb
value:
- aws-alb.aws.net
mx_records:
- name: mail
value:
- 0 mail.example-domain-name.com
txt_records:
- name: mail
value:
- "v=spf1 a ip4:9.10.11.0/24 mx ~all"
srv_records:
- name: "@"
value:
- "_imaps._tcp.gmail.com. 86400 IN SRV 5 0 993 imap.gmail.com"
cluster:
create: true
rds:
- identifier: testservice
in_use_by:
- test-service
engine: 'postgres'
instance_class:
staging: 'db.t2.micro'
production: 'db.t2.small'
engine_version: '11.4'
allocated_storage: 20
storage_encrypted: true
storage_type: 'gp3'
db_name: 'testapp'
port: 5432
maintenance_window: 'mon:19:00-mon:19:30'
backup_window: '09:00-10:00'
backup_retention_period: 31
force_ssl: true
parameter_store_path_db_url_name: 'DATABASE_URL'
sql_backup_scheduled_task_environment_variables:
- name: "foo"
value: "bar"
check_sql_backup_scheduled_task_environment_variables:
- name: "foo"
value: "bar"
replication_bucket_destination_arn: "arn:aws:s3:::your-destination-bucket-name"
replication_kms_key_id: "your-destination-kms-key-id"
codebuild_access:
- service-name
aurora:
- identifier: testaurora
in_use_by:
- test-service
engine: 'aurora-postgresql'
engine_version: '11.9'
db_name: 'testapp'
port: 5432
maintenance_window: 'mon:19:00-mon:19:30'
backup_window: '09:00-10:00'
backup_retention_period: 31
force_ssl: true
minimum_size:
staging: 1
production: 2
maximum_size:
staging: 1
production: 2
parameter_store_path_db_url_name: 'DATABASE_URL'
replication_bucket_destination_arn: "arn:aws:s3:::your-destination-bucket-name"
replication_kms_key_id: "your-destination-kms-key-id"
elasticache_cluster:
- identifier: testredis
in_use_by:
- test-service
engine: 'redis'
node_type: 'cache.t2.micro'
node_count: 1
engine_version: '5.0.6'
port: 6379
maintenance_window: 'mon:19:00-mon:22:00'
snapshot_window: '09:00-10:00'
parameter_store_path_elasticache_cluster_url_name: 'REDIS_URL'
opensearch_cluster:
- identifier: testos
in_use_by:
- test-service
version: '1.2'
master_enabled: true
master_count: '1'
master_type: 'c6g.large.search'
instance_count: '3'
instance_type: 't3.small.search'
warm_enabled: true
warm_count: '2'
warm_type: 'ultrawarm1.medium.search'
volume_size: '20'
parameter_store_path_opensearch_cluster_url_name: 'ELASTICSEARCH_URL'
services:

View File

@@ -0,0 +1,4 @@
---
dalmatian_config_yml_source:
type: url
location: https://example.com/dalmatian.yml

View File

@@ -0,0 +1,66 @@
module Dalmatian
class Terraform
def self.fmt(args = nil)
Helper.run!("terraform fmt #{args}")
end
def self.init(upgrade: false)
cmd = "terraform init"
cmd << " -upgrade=true" if upgrade
Helper.run!(cmd)
end
def self.validate(var_file: "", tfvars: {})
cmd = tfvar_arg_string(tfvars)
cmd << " terraform validate"
cmd << " -var-file=#{var_file}" unless var_file.to_s.strip.empty?
Helper.run!(cmd)
end
def self.plan(var_file: "", tfvars: {}, verbose: false)
cmd = tfvar_arg_string(tfvars)
cmd << " terraform plan"
cmd << " -var-file=#{var_file}" unless var_file.to_s.strip.empty?
Logger.debug(cmd) if verbose
Helper.run!(cmd)
end
def self.apply(var_file: "", tfvars: {}, auto_approve: false, verbose: false)
cmd = tfvar_arg_string(tfvars)
cmd << " terraform apply"
cmd << " -var-file=#{var_file}" unless var_file.to_s.strip.empty?
cmd << " -auto-approve" if auto_approve
Logger.debug(cmd) if verbose
Helper.run!(cmd)
end
def self.destroy(var_file: "", tfvars: {}, verbose: false)
cmd = tfvar_arg_string(tfvars)
cmd << " terraform destroy"
cmd << " -var-file=#{var_file}" unless var_file.to_s.strip.empty?
Logger.debug(cmd) if verbose
Helper.run!(cmd)
end
def self.tfvar_arg_string(tfvars)
tfvars.map { |key, value|
value = value.to_json.gsub(/"([^"]+)":/, '\1 =') if value.respond_to?(:each)
"TF_VAR_#{key}='#{value}'"
}.join(" ")
end
def self.ensure_presence_of_workspace(workspace_name)
Logger.info("Creating #{workspace_name} workspace")
Helper.run!("terraform workspace new #{workspace_name}")
rescue Error
Logger.info("Selecting #{workspace_name} workspace")
Helper.run!("terraform workspace select #{workspace_name}")
end
def self.list_workspaces
Dir.chdir(Infrastructure::BOOTSTRAP_PATH) do
Helper.run!("terraform workspace list")
end
end
end
end

View File

@@ -0,0 +1,5 @@
---
dalmatian_config_yml_source:
type: S3
bucket: dalmatian-config-bucket
key: dalmatian.yml

View File

@@ -0,0 +1,5 @@
---
dalmatian_config_yml_source:
type: git
repo: git@github.com:dxw/dalmatian-config
filename: dalmatian.yml

View File

@@ -0,0 +1,394 @@
module Dalmatian
class Cluster
def initialize(description, helper: Helper, logger: Logger)
properties = description.last
@id = description.first
@account_id = properties["account_id"]
@environments = properties["environments"]
@properties = properties["cluster"]
@logger = logger
@helper = helper
@hosted_zones = build_hosted_zones(properties["hosted_zones"])
@s3s = build_s3s(properties["s3"])
@vpn_customer_gateways = build_vpn_customer_gateways(properties["vpn_customer_gateway"])
@sources = build_sources(properties["dalmatian_config_source"])
@services = build_services(properties["services"])
@wafs = build_wafs(properties["waf"])
@rdss = build_rdss(properties["rds"])
@auroras = build_auroras(properties["aurora"])
@elasticache_clusters = build_elasticache_clusters(properties["elasticache_cluster"])
@opensearch_clusters = build_opensearch_clusters(properties["opensearch_cluster"])
@shared_loadbalancers = build_shared_loadbalancers(properties["shared_loadbalancer"])
end
attr_reader :hosted_zones, :s3s, :vpn_customer_gateways, :sources, :services, :wafs, :rdss, :auroras, :elasticache_clusters,
:opensearch_clusters, :shared_loadbalancers, :environments, :properties, :account_id, :id
def name
properties["name"] || id
end
def fetch(infrastructure_name: "all")
sources.each do |source|
next unless ["all", source.cluster_id].include? infrastructure_name
return verify_source_path(source) unless remotely_held?(source)
destination = "#{Infrastructure::PATH}/#{source.name}.config"
fetch_source(source, destination)
end
end
def deploy(environment_name: "all", service_name: "all", skip_deployments: "", test: false, auto_approve: false, plan: false, destroy: false, verbose: false)
skip_deployments = skip_deployments.split(/\s*,\s*/)
hosted_zones.each do |hosted_zone|
unless skip_deployments.include?("hosted-zone")
deploy_hosted_zone(hosted_zone, test, auto_approve, plan, destroy, verbose)
end
end
s3s.each do |s3|
unless skip_deployments.include?("s3")
deploy_s3(s3, test, auto_approve, plan, destroy, verbose)
end
end
vpn_customer_gateways.each do |vpn_customer_gateway|
unless skip_deployments.include?("vpn-customer-gateway")
deploy_vpn_customer_gateway(vpn_customer_gateway, test, auto_approve, plan, destroy, verbose)
end
end
environments.each do |name, details|
next unless ["all", name].include?(environment_name)
environment = {name: name, details: details}
deploy_cluster(environment, test, auto_approve, plan, destroy, verbose) if create? && !skip_deployments.include?("ecs")
wafs.each do |waf|
unless skip_deployments.include?("waf")
deploy_waf(waf, environment, test, auto_approve, plan, destroy, verbose)
end
end
rdss.each do |rds|
unless skip_deployments.include?("rds")
deploy_rds(rds, environment, test, auto_approve, plan, destroy, verbose)
end
end
auroras.each do |aurora|
unless skip_deployments.include?("aurora")
deploy_aurora(aurora, environment, test, auto_approve, plan, destroy, verbose)
end
end
elasticache_clusters.each do |elasticache_cluster|
unless skip_deployments.include?("elasticache-cluster")
deploy_elasticache_cluster(elasticache_cluster, environment, test, auto_approve, plan, destroy, verbose)
end
end
opensearch_clusters.each do |opensearch_cluster|
unless skip_deployments.include?("opensearch-cluster")
deploy_opensearch_cluster(opensearch_cluster, environment, test, auto_approve, plan, destroy, verbose)
end
end
services.each do |service|
next unless service.to_params["launch_on"].include?(name)
if [service.to_params["name"], "all"].include?(service_name) && !skip_deployments.include?("ecs-services")
deploy_service(service, environment, test, auto_approve, plan, destroy, verbose)
end
end
shared_loadbalancers.each do |shared_loadbalancer|
unless skip_deployments.include?("shared-loadbalancer")
deploy_shared_loadbalancer(shared_loadbalancer, environment, test, auto_approve, plan, destroy, verbose)
end
end
sources.each do |source|
deploy_source(source, environment, test, auto_approve, plan, destroy, verbose)
end
end
end
def target_directory
File.join(Infrastructure::APP_ROOT, Infrastructure::PATH, "ecs")
end
private
attr_reader :logger, :helper
def build_hosted_zones(hosted_zones_references)
(hosted_zones_references || []).map do |reference|
HostedZone.new(cluster: self, reference: reference)
end
end
def build_s3s(s3_references)
(s3_references || []).map do |reference|
S3.new(cluster: self, reference: reference)
end
end
def build_vpn_customer_gateways(vpn_customer_gateway_references)
(vpn_customer_gateway_references || []).map do |reference|
VpnCustomerGateway.new(cluster: self, reference: reference)
end
end
def build_sources(source_references)
(source_references || []).map do |reference|
Source.new(cluster: self, reference: reference)
end
end
def build_services(service_references)
(service_references || []).map do |reference|
Service.new(cluster: self, reference: reference)
end
end
def build_elasticache_clusters(elasticache_references)
(elasticache_references || []).map do |reference|
ElasticacheCluster.new(cluster: self, reference: reference)
end
end
def build_opensearch_clusters(opensearch_references)
(opensearch_references || []).map do |reference|
OpensearchCluster.new(cluster: self, reference: reference)
end
end
def build_wafs(waf_references)
(waf_references || []).map do |reference|
WAF.new(cluster: self, reference: reference)
end
end
def build_rdss(rds_references)
(rds_references || []).map do |reference|
Rds.new(cluster: self, reference: reference)
end
end
def build_auroras(aurora_references)
(aurora_references || []).map do |reference|
Aurora.new(cluster: self, reference: reference)
end
end
def build_shared_loadbalancers(shared_loadbalancer_references)
(shared_loadbalancer_references || []).map do |reference|
SharedLoadbalancer.new(cluster: self, reference: reference)
end
end
def deploy_hosted_zone(hosted_zone, test, auto_approve, plan, destroy, verbose)
test_hosted_zone(hosted_zone) if test
HostedZoneDeployment.new(
hosted_zone: hosted_zone,
plan: plan,
auto_approve: auto_approve,
destroy: destroy,
verbose: verbose
).call
end
def deploy_s3(s3, test, auto_approve, plan, destroy, verbose)
test_s3(s3) if test
S3Deployment.new(
s3: s3,
plan: plan,
auto_approve: auto_approve,
destroy: destroy,
verbose: verbose
).call
end
def deploy_vpn_customer_gateway(vpn_customer_gateway, test, auto_approve, plan, destroy, verbose)
test_vpn_customer_gateway(vpn_customer_gateway) if test
VpnCustomerGatewayDeployment.new(
vpn_customer_gateway: vpn_customer_gateway,
plan: plan,
auto_approve: auto_approve,
destroy: destroy,
verbose: verbose
).call
end
def deploy_source(source, env, test, auto_approve, plan, destroy, verbose)
test_source(source, env) if test
SourceDeployment.new(
source: source,
env: env,
plan: plan,
auto_approve: auto_approve,
destroy: destroy,
verbose: verbose
).call
end
def deploy_cluster(env, test, auto_approve, plan, destroy, verbose)
test_cluster(env) if test
ClusterDeployment.new(
cluster: self,
env: env,
plan: plan,
auto_approve: auto_approve,
destroy: destroy,
verbose: verbose
).call
end
def deploy_service(service, env, test, auto_approve, plan, destroy, verbose)
test_service(service, env) if test
ServiceDeployment.new(
service: service,
env: env,
plan: plan,
auto_approve: auto_approve,
destroy: destroy,
verbose: verbose
).call
end
def deploy_waf(waf, env, test, auto_approve, plan, destroy, verbose)
test_waf(waf, env) if test
WAFDeployment.new(
waf: waf,
env: env,
plan: plan,
auto_approve: auto_approve,
destroy: destroy,
verbose: verbose
).call
end
def deploy_rds(rds, env, test, auto_approve, plan, destroy, verbose)
test_rds(rds, env) if test
RdsDeployment.new(
rds: rds,
env: env,
plan: plan,
auto_approve: auto_approve,
destroy: destroy,
verbose: verbose
).call
end
def deploy_aurora(aurora, env, test, auto_approve, plan, destroy, verbose)
test_aurora(aurora, env) if test
AuroraDeployment.new(
aurora: aurora,
env: env,
plan: plan,
auto_approve: auto_approve,
destroy: destroy,
verbose: verbose
).call
end
def deploy_elasticache_cluster(elasticache_cluster, env, test, auto_approve, plan, destroy, verbose)
test_elasticache_cluster(elasticache_cluster, env) if test
ElasticacheClusterDeployment.new(
elasticache_cluster: elasticache_cluster,
env: env,
plan: plan,
auto_approve: auto_approve,
destroy: destroy,
verbose: verbose
).call
end
def deploy_opensearch_cluster(opensearch_cluster, env, test, auto_approve, plan, destroy, verbose)
test_opensearch_cluster(opensearch_cluster, env) if test
OpensearchClusterDeployment.new(
opensearch_cluster: opensearch_cluster,
env: env,
plan: plan,
auto_approve: auto_approve,
destroy: destroy,
verbose: verbose
).call
end
def deploy_shared_loadbalancer(shared_loadbalancer, env, test, auto_approve, plan, destroy, verbose)
test_shared_loadbalancer(shared_loadbalancer, env) if test
SharedLoadbalancerDeployment.new(
shared_loadbalancer: shared_loadbalancer,
env: env,
plan: plan,
auto_approve: auto_approve,
destroy: destroy,
verbose: verbose
).call
end
def create?
properties["create"]
end
def test_hosted_zone(hosted_zone)
HostedZoneTest.new(hosted_zone: hosted_zone).call
end
def test_s3(s3)
S3Test.new(s3: s3).call
end
def test_vpn_customer_gateway(vpn_customer_gateway)
VpnCustomerGatewayTest.new(vpn_customer_gateway: vpn_customer_gateway).call
end
def test_cluster(env)
ClusterTest.new(cluster: self, env: env).call
end
def test_source(source, env)
SourceTest.new(source: source, env: env).call
end
def test_service(service, env)
ServiceTest.new(service: service, env: env).call
end
def test_waf(waf, env)
WAFTest.new(waf: waf, env: env).call
end
def test_rds(rds, env)
RdsTest.new(rds: rds, env: env).call
end
def test_aurora(aurora, env)
AuroraTest.new(aurora: aurora, env: env).call
end
def test_elasticache_cluster(elasticache_cluster, env)
ElasticacheClusterTest.new(elasticache_cluster: elasticache_cluster, env: env).call
end
def test_opensearch_cluster(opensearch_cluster, env)
OpensearchClusterTest.new(opensearch_cluster: opensearch_cluster, env: env).call
end
def test_shared_loadbalancer(shared_loadbalancer, env)
SharedLoadbalancerTest.new(shared_loadbalancer: shared_loadbalancer, env: env).call
end
def fetch_source(source, destination)
logger.info("Cloning #{source.reference} ...")
helper.remove(destination)
helper.git_clone(source.reference, destination)
helper.change_to(destination)
helper.terrafile
helper.change_to(Infrastructure::APP_ROOT)
end
def verify_source_path(source)
if File.directory?(source.reference)
logger.info "Found #{source.reference}"
else
logger.error "#{source.reference} does not exist"
end
end
def remotely_held?(source)
source.reference.start_with?("https://", "git@", "git://", "git+ssh://")
end
end
end

View File

@@ -0,0 +1,34 @@
module Dalmatian
class Infrastructure
def initialize(configuration = ConfigurationReader.new.infrastructures)
@clusters = configuration.map { |description| Cluster.new(description) }
end
BOOTSTRAP_PATH = "terraform/account_bootstrap".freeze
PATH = "terraform/infrastructures".freeze
APP_ROOT = File.expand_path(File.dirname("."))
def fetch(infrastructure_name: "all")
clusters.each do |cluster|
cluster.fetch(infrastructure_name: infrastructure_name)
end
end
def test(environment_name: "all", infrastructure_name: "all", service_name: "all", skip_deployments: "")
deploy(environment_name: environment_name, infrastructure_name: infrastructure_name, service_name: service_name, skip_deployments: skip_deployments, test: true, auto_approve: false, plan: true, destroy: false)
end
def deploy(environment_name: "all", infrastructure_name: "all", service_name: "all", skip_deployments: "", test: false, auto_approve: false, plan: false, destroy: false, verbose: false)
clusters.each do |cluster|
next if cluster.id != infrastructure_name && infrastructure_name != "all"
cluster.deploy(environment_name: environment_name, service_name: service_name, skip_deployments: skip_deployments, test: test, auto_approve: auto_approve, plan: plan, destroy: destroy, verbose: verbose)
end
end
def remove(environment_name: "all", infrastructure_name: "all", service_name: "all", skip_deployments: "")
deploy(environment_name: environment_name, infrastructure_name: infrastructure_name, service_name: service_name, skip_deployments: skip_deployments, test: false, auto_approve: false, plan: false, destroy: true)
end
attr_reader :clusters
end
end

View File

@@ -0,0 +1,149 @@
module Dalmatian
class Aurora
def initialize(cluster:, reference:)
@cluster = cluster
@reference = reference
end
attr_reader :cluster, :reference
def identifier
reference["identifier"]
end
def in_use_by
reference["in_use_by"] || []
end
def clusters_in_use
clusters_in_use = {}
cluster_environments.each do |env_name, _env_config|
clusters_in_use[env_name] = []
services.each do |service|
in_use_by.each do |service_name|
next if service.name != service_name
clusters_in_use[env_name] << if service.launch_on_cluster.empty?
"default_dalmatian_ecs_cluster"
else
service.launch_on_cluster
end
end
end
if in_use_by.empty?
clusters_in_use[env_name] << "default_dalmatian_ecs_cluster" ## fails SimpleCov
end
clusters_in_use[env_name] = clusters_in_use[env_name].uniq
end
clusters_in_use
end
def minimum_size
minimum_size = {}
reference["minimum_size"] = {} if reference["minimum_size"].nil?
cluster_environments.each do |env_name, _env_config|
minimum_size[env_name] = reference["minimum_size"][env_name] || ""
end
minimum_size
end
def maximum_size
maximum_size = {}
reference["maximum_size"] = {} if reference["maximum_size"].nil?
cluster_environments.each do |env_name, _env_config|
maximum_size[env_name] = reference["maximum_size"][env_name] || ""
end
maximum_size
end
def engine
reference["engine"]
end
def engine_version
reference["engine_version"]
end
def db_name
reference["db_name"]
end
def port
reference["port"] || ""
end
def maintenance_window
reference["maintenance_window"] || ""
end
def backup_window
reference["backup_window"] || ""
end
def backup_retention_period
reference["backup_retention_period"] || ""
end
def parameter_store_path_db_url_name
reference["parameter_store_path_db_url_name"] || ""
end
def sql_backup_scheduled_task_environment_variables
reference["sql_backup_scheduled_task_environment_variables"] || []
end
def sync_sql_backup_to_azure
reference["sync_sql_backup_to_azure"] || false
end
def replication_bucket_destination_arn
reference["replication_bucket_destination_arn"] || ""
end
def replication_kms_key_id
reference["replication_kms_key_id"] || ""
end
def force_ssl
reference["force_ssl"] || "false"
end
def cluster_id
cluster.id
end
def cluster_environments
cluster.environments
end
def account_id
cluster.account_id
end
def services
cluster.services
end
def to_params
{
"identifier" => identifier,
"in_use_by" => in_use_by,
"clusters_in_use" => clusters_in_use,
"minimum_size" => minimum_size,
"maximum_size" => maximum_size,
"engine" => engine,
"engine_version" => engine_version,
"db_name" => db_name,
"port" => port,
"force_ssl" => force_ssl,
"maintenance_window" => maintenance_window,
"backup_window" => backup_window,
"backup_retention_period" => backup_retention_period,
"parameter_store_path_db_url_name" => parameter_store_path_db_url_name,
"sql_backup_scheduled_task_environment_variables" => sql_backup_scheduled_task_environment_variables,
"sync_sql_backup_to_azure" => sync_sql_backup_to_azure,
"replication_bucket_destination_arn" => replication_bucket_destination_arn,
"replication_kms_key_id" => replication_kms_key_id
}
end
end
end

View File

@@ -0,0 +1,164 @@
module Dalmatian
class Rds
def initialize(cluster:, reference:)
@cluster = cluster
@reference = reference
end
attr_reader :cluster, :reference
def identifier
reference["identifier"]
end
def in_use_by
reference["in_use_by"] || []
end
def clusters_in_use
clusters_in_use = {}
cluster_environments.each do |env_name, _env_config|
clusters_in_use[env_name] = []
services.each do |service|
in_use_by.each do |service_name|
next if service.name != service_name
clusters_in_use[env_name] << if service.launch_on_cluster.empty?
"default_dalmatian_ecs_cluster"
else
service.launch_on_cluster
end
end
end
if in_use_by.empty?
clusters_in_use[env_name] << "default_dalmatian_ecs_cluster" ## fails SimpleCov
end
clusters_in_use[env_name] = clusters_in_use[env_name].uniq
end
clusters_in_use
end
def instance_class
instance_class = {}
reference["instance_class"] = {} if reference["instance_class"].nil?
cluster_environments.each do |env_name, _env_config|
instance_class[env_name] = reference["instance_class"][env_name] || ""
end
instance_class
end
def engine
reference["engine"]
end
def engine_version
reference["engine_version"]
end
def allocated_storage
reference["allocated_storage"] || ""
end
def storage_encrypted
reference["storage_encrypted"].nil? || reference["storage_encrypted"]
end
def storage_type
reference["storage_type"] || "gp3"
end
def db_name
reference["db_name"]
end
def port
reference["port"] || ""
end
def maintenance_window
reference["maintenance_window"] || ""
end
def backup_window
reference["backup_window"] || ""
end
def backup_retention_period
reference["backup_retention_period"] || ""
end
def force_ssl
reference["force_ssl"] || "false"
end
def parameter_store_path_db_url_name
reference["parameter_store_path_db_url_name"] || ""
end
def sql_backup_scheduled_task_environment_variables
reference["sql_backup_scheduled_task_environment_variables"] || []
end
def check_sql_backup_scheduled_task_environment_variables
reference["check_sql_backup_scheduled_task_environment_variables"] || []
end
def sync_sql_backup_to_azure
reference["sync_sql_backup_to_azure"] || false
end
def replication_bucket_destination_arn
reference["replication_bucket_destination_arn"] || ""
end
def replication_kms_key_id
reference["replication_kms_key_id"] || ""
end
def codebuild_access
reference["codebuild_access"] || []
end
def cluster_id
cluster.id
end
def cluster_environments
cluster.environments
end
def account_id
cluster.account_id
end
def services
cluster.services
end
def to_params
{
"identifier" => identifier,
"in_use_by" => in_use_by,
"clusters_in_use" => clusters_in_use,
"instance_class" => instance_class,
"engine" => engine,
"engine_version" => engine_version,
"allocated_storage" => allocated_storage,
"storage_encrypted" => storage_encrypted,
"storage_type" => storage_type,
"db_name" => db_name,
"port" => port,
"maintenance_window" => maintenance_window,
"backup_window" => backup_window,
"backup_retention_period" => backup_retention_period,
"force_ssl" => force_ssl,
"parameter_store_path_db_url_name" => parameter_store_path_db_url_name,
"sql_backup_scheduled_task_environment_variables" => sql_backup_scheduled_task_environment_variables,
"check_sql_backup_scheduled_task_environment_variables" => check_sql_backup_scheduled_task_environment_variables,
"sync_sql_backup_to_azure" => sync_sql_backup_to_azure,
"replication_bucket_destination_arn" => replication_bucket_destination_arn,
"replication_kms_key_id" => replication_kms_key_id,
"codebuild_access" => codebuild_access
}
end
end
end

View File

@@ -0,0 +1,413 @@
---
parameter-store-path-prefix: /dalmatian-variables/
account-bootstrap:
variables:
region: eu-west-2
root_domain_zone: dalmatian.dxw.net
dalmatian_read_users:
- Joe
- Ashley
dalmatian_admin_users:
- Harper
- Pat
ci:
variables:
region: eu-west-2
prci_github_owner: dxw
prci_github_repository: dalmatian
prci_codebuild_image: thedxw/testing-terraform-docker:latest
bpsp_source_branch: master
bpbp_codebuild_compute_type: BUILD_GENERAL1_SMALL
bpbp_codebuild_image: thedxw/testing-terraform-docker:latest
infrastructure-defaults:
variables:
region: eu-west-2
cidr: 10.0.0.0/16
root_domain_zone: dalmatian.dxw.net
internal_domain_zone: dalmatian.internal
ecs_private_subnets:
- availability_zone: "eu-west-2a"
cidr: 10.0.128.0/24
- availability_zone: "eu-west-2b"
cidr: 10.0.129.0/24
- availability_zone: "eu-west-2c"
cidr: 10.0.130.0/24
extra_public_subnets:
- availability_zone: eu-west-2a
cidr: 10.0.0.0/24
- availability_zone: eu-west-2b
cidr: 10.0.1.0/24
- availability_zone: eu-west-2c
cidr: 10.0.2.0/24
instances_key_name: dalmatian-ecs-instances
instance_type: t2.medium
min_servers: 2
max_servers: 4
max_instance_lifetime: 86400
associate_public_ip_address: 0
docker_storage_size: 40
dockerhub_email: ''
dockerhub_token: ''
enable_efs: "false"
encrypt_efs: true
efs_dirs: []
monitoring_docs_path: https://github.com/dxw/dalmatian/docs/monitoring-alarms/
infrastructures:
new-dedicated-cluster:
dalmatian_config_source:
- git@github.com:dxw/awesome-app-dalmatian-config
account_id: 123456789012
vpn_customer_gateway:
- name: test-vpn
bgp_asn: 65000
ip_address: 1.2.3.4
s3:
- name: 'test'
enable_s3_versioning: true
encrypted: true
acl: 'private'
policy:
staging:
rw:
services:
- test-service
service_cloudfront_read_access:
- test-service-staging
cloudfront:
create: true
domain_names:
- example.com
- example2.com
certificate: 'arn:aws:acm:lb-region-0:000000000000:certificate/00000000-0000-0000-0000-000000000000'
hosted_zones:
- domain: "example-domain-name.com"
ns_records:
- name: delegated
value:
- ns1.aws.com
a_records:
- name: some-service
value:
- 1.2.3.4
- name: mail
value:
- 5.6.7.8
alias_records:
- name: example-domain-name.com
value: cf-distribution.aws.net
- name: www
value: cf-distribution.aws.net
cname_records:
- name: alb
value:
- aws-alb.aws.net
mx_records:
- name: mail
value:
- 0 mail.example-domain-name.com
txt_records:
- name: mail
value:
- "v=spf1 a ip4:9.10.11.0/24 mx ~all"
srv_records:
- name: "@"
value:
- "_imaps._tcp.gmail.com. 86400 IN SRV 5 0 993 imap.gmail.com"
cluster:
create: true
rds:
- identifier: testservice
in_use_by:
- test-service
engine: 'postgres'
instance_class:
staging: 'db.t2.micro'
production: 'db.t2.small'
engine_version: '11.4'
allocated_storage: 20
storage_encrypted: true
storage_type: 'gp3'
db_name: 'testapp'
port: 5432
maintenance_window: 'mon:19:00-mon:19:30'
backup_window: '09:00-10:00'
backup_retention_period: 31
force_ssl: true
parameter_store_path_db_url_name: 'DATABASE_URL'
sql_backup_scheduled_task_environment_variables:
- name: "foo"
value: "bar"
check_sql_backup_scheduled_task_environment_variables:
- name: "foo"
value: "bar"
codebuild_access:
- service-name
elasticache_cluster:
- identifier: testredis
in_use_by:
- test-service
engine: 'redis'
node_type: 'cache.t2.micro'
node_count: 1
engine_version: '5.0.6'
port: 6379
maintenance_window: 'mon:19:00-mon:22:00'
snapshot_window: '09:00-10:00'
parameter_store_path_elasticache_cluster_url_name: 'REDIS_URL'
opensearch_cluster:
- identifier: testos
in_use_by:
- test-service
version: '1.2'
master_enabled: true
master_count: '1'
master_type: 'c6g.large.search'
instance_count: '3'
instance_type: 't3.small.search'
warm_enabled: true
warm_count: '2'
warm_type: 'ultrawarm1.medium.search'
volume_size: '20'
parameter_store_path_opensearch_cluster_url_name: 'ELASTICSEARCH_URL'
services:
- name: test-service
blue_green:
production:
enabled: true
db_copy:
from_db_host_ps_key: /test-app/other-test-service/production/DB_HOST
from_db_name_ps_key: /test-app/other-test-service/production/DB_NAME
from_db_user_ps_key: /test-app/other-test-service/production/DB_USER
from_db_pass_ps_key: /test-app/other-test-service/production/DB_PASSWORD
blue_green_db_name_ps_key: /test-app/test-service/production/DB_HOST
blue_green_db_user_ps_key: /test-app/test-service/production/DB_USER
blue_green_db_pass_ps_key: /test-app/test-service/production/DB_PASSWORD
sql_backups_s3_bucket: new-dedicated-cluster-testservice-production-sql-backup
db_rewrites:
- from: other-test-service.example.com
to: test-service.example.com
directory_copy:
- from: /mnt/efs/other-test-service-media
to: /mnt/efs/test-service-media
chown: "33:33"
asset_copy_trigger_ps_key: /test-app/test-service/production/BLUE_GREEN_ASSET_COPY_TRIGGER
staging:
enabled: false
launch_on:
- production
- staging
launch_on_cluster: "test"
monitoring:
production:
opsgenie_alerts:
enabled: true
cloudfront_5xx:
enabled: true
threshold: "95"
evaluation_periods: "15"
staging:
opsgenie_alerts:
enabled: false
ghost_inspector:
enabled: false
parameter_store_path:
staging: '/test-path'
parameter_store_key:
staging: 'arn:aws:kms:eu-west-2:000000000000:key/00000000-0000-0000-0000-000000000000'
container_count: "2"
enable_max_one_container_per_instance: true
cloudfront:
create: true
managed_cache_policy: "CachingDisabled"
managed_origin_policy: "AllViewerExceptHostHeader"
managed_response_headers_policy: "CORS-with-preflight-and-SecurityHeadersPolicy"
tls_protocol_version:
production: 'TLSv1.2_2019'
staging: 'TLSv1.2_2021'
bypass_protection:
production:
enabled: true
exclude_domains:
- example.com
origin_keepalive_timeout:
staging: "10"
production: "60"
origin_read_timeout:
staging: "40"
production: "60"
basic_auth:
staging: true
viewer_request_functions:
- name: 'default'
true_client_ip_header: true
ip_subnet_allow_list:
- '0.0.0.0/0'
redirects:
- from_hostname_pattern: example-old-domain-name.*
from_path_pattern: /*
to_hostname: example-domain-name.co.uk
to_path: /${path}
offline_page_http_status:
500: "/error-pages/500.html"
501: "/error-pages/501.html"
502: "/error-pages/502.html"
503: "/error-pages/503.html"
504: "/error-pages/504.html"
custom_origins:
staging:
- origin: test-media-staging.s3.amazonaws.com
id: test-media-staging-s3
production:
- origin: test-media-production.s3.amazonaws.com
id: test-media-production-s3
custom_behaviors:
staging:
- path_patterns:
- '/media/*'
target_origin_id: test-media-staging-s3
min_ttl: 1200
default_ttl: 3600
max_ttl: 86400
associate_viewer_request_function: "default"
production:
- path_patterns:
- '/media/*'
target_origin_id: test-media-production-s3
min_ttl: 1200
default_ttl: 3600
max_ttl: 86400
associate_viewer_request_function: "default"
managed_cache_policy: "CachingDisabled"
managed_origin_policy: "AllViewerExceptHostHeader"
managed_response_headers_policy: "CORS-with-preflight-and-SecurityHeadersPolicy"
lb_ip_whitelist:
- name: public
cidr: 0.0.0.0/0
lb_idle_timeout: '60'
global_accelerator:
production: true
health_check_path: '/check'
health_check_grace_period: '0'
serve_from_subdirectory: "/test-subdir"
domain_names:
staging:
- example-domain-name.co.uk
lb_ssl_certificate:
staging: 'arn:aws:acm:lb-region-0:000000000000:certificate/00000000-0000-0000-0000-000000000000'
cloudfront_ssl_certificate:
staging: 'arn:aws:acm:us-east-1:000000000000:certificate/00000000-0000-0000-0000-000000000000'
image_source: build_from_github_repo
image_location: git@github.com:dxw/dalmatian-test-app
custom_codestar_connection_arn: "arn:aws:codestar-connections:eu-west-2:000000000000:connection/00000000-0000-0000-0000-000000000000"
buildspec: 'buildspec.yml'
container_port: 3100
container_command: ["/docker-entrypoint.sh", "rails", "server"]
container_volumes:
- name: test-volume
host_path: /mnt/test
container_path: /test
container_extra_hosts:
- hostname: "example.com"
ipAddress: "127.0.0.1"
scheduled_tasks:
- name: old-scheduled-task
command: ["rake", "do:cron"]
schedule_expression: "cron(0 4 * * ? *)"
- name: test-scheduled-task
command: ["rake", "do:something"]
schedule_expression:
staging: "cron(0 12 * * ? *)"
production: "cron(1 2 * * ? *)"
workers:
- name: test-worker
      command: ["bundle", "exec", "sidekiq"]
proxy_configuration:
staging:
https_proxy: "dalmatian_tinyproxy"
http_proxy: "http://my.test-proxy.com:8888"
no_proxy:
- "*.example.com"
- "93.184.216.34/32"
home_directory: "/home/user"
shared_loadbalancer:
- name: test-lb-1
idle_timeout: '60'
global_accelerator:
production: true
staging: false
in_use_by:
- test-service
waf:
- name: test-1
action: "count"
aws_managed_rules:
- name: 'AWSManagedRulesSQLiRuleSet'
excluded_path_patterns:
- "/wp-admin/async-upload.php"
- name: 'AWSManagedRulesCommonRuleSet'
exclude_rules:
- 'SizeRestrictions_BODY'
associations:
shared_loadbalancers:
- "test-lb-1"
service_cloudfront:
- "test-service"
environments:
production:
instance_type: t2.medium
min_servers: 2
max_servers: 4
staging:
instance_type: t2.small
min_servers: 2
max_servers: 4
track_revision: feature/experiment
extra_ecs_clusters:
- name: "test"
subnets_name: "extra_private_subnets"
min_servers: "2"
max_servers: "4"
instance_type: "t3.small"
tinyproxy:
create: true
shared-new-cluster:
account_id: 123456789012
cluster:
create: true
environments:
production:
instance_type: t2.medium
min_servers: 2
max_servers: 10
staging:
instance_type: t2.small
min_servers: 2
max_servers: 10
existing-shared-cluster-staging:
dalmatian_config_source:
- git@github.com:dxw/funky-app-dalmatian-config
account_id: 123456789012
cluster:
create: false
name: shared-cluster
environments:
staging:
example_var: foo
existing-shared-cluster-production:
dalmatian_config_source:
- git@github.com:dxw/neat-app-dalmatian-config
account_id: 123456789012
cluster:
create: false
name: shared-cluster
environments:
production:
example_var: bar
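A config in the shape above can be loaded and inspected with plain Ruby. A minimal sketch — the inline document here is a trimmed, hypothetical sample; the real config nests far more under each infrastructure name:

```ruby
require "yaml"

# Load a dalmatian-style config fragment (trimmed, hypothetical sample).
config = YAML.safe_load(<<~YAML)
  shared-new-cluster:
    account_id: 123456789012
    cluster:
      create: true
    rds:
      - identifier: testservice
        engine: postgres
        port: 5432
YAML

infra = config.fetch("shared-new-cluster")
databases = infra.fetch("rds", [])
databases.each do |db|
  puts "#{db["identifier"]}: #{db["engine"]} on port #{db["port"]}"
end
```

`YAML.safe_load` restricts parsing to plain scalars and collections, which is all this config shape needs.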


@@ -0,0 +1,56 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 581-630 of 3542 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 630.
--- FILE CONTENT (truncated) ---
"staging" => 1
},
"engine" => "aurora-postgresql",
"engine_version" => "11.9",
"db_name" => "testapp",
"port" => 5432,
"force_ssl" => true,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::your-destination-bucket-name",
"replication_kms_key_id" => "your-destination-kms-key-id"
}
)
)
)
end
let(:cluster1_validate_cmd_staging_aurora) do
terraform_validate_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "4",
"track_revision" => "feature/experiment",
"extra_ecs_clusters" => [
{
"name" => "test",
"subnets_name" => "extra_private_subnets",
"min_servers" => "2",
"max_servers" => "4",
"instance_type" => "t3.small"
}
],
"tinyproxy" => {


@@ -0,0 +1,22 @@
# 51. SQL backups to S3
Date: 2021-01-22
## Status
Accepted
## Context
We need to take SQL dumps as backups so that, in case of disaster, we can restore from an SQL backup.
We currently take RDS snapshots, but these can only be restored to an RDS instance, which can lengthen the time it takes to restore a database.
## Decision
We will create a scheduled task for each RDS launched that takes an SQL dump of each database created on it.
These will be stored in S3, using Standard-IA (infrequent access) storage to reduce the cost of storing the backups.
The backups will be deleted after 90 days.
## Consequences
If AWS goes down completely, these backups will be unavailable, so offsite backups should be kept as well.
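The retention rule in this decision (dated dumps in S3, deleted after 90 days) can be sketched in Ruby. This is illustrative only, not the actual dalmatian implementation; the key layout and helper names are assumptions:

```ruby
# Illustrative sketch: S3 object key for a dated SQL dump, and the
# 90-day retention cutoff described in this ADR.
RETENTION_SECONDS = 90 * 24 * 60 * 60

def backup_key(db_name, taken_at)
  "sql-backups/#{db_name}/#{taken_at.strftime('%Y-%m-%d')}.sql.gz"
end

def expired?(taken_at, now: Time.now)
  # Time subtraction yields elapsed seconds.
  (now - taken_at) > RETENTION_SECONDS
end

puts backup_key("testapp", Time.utc(2021, 1, 22))
```

In practice the deletion would be handled by an S3 lifecycle rule rather than application code, but the cutoff logic is the same.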


@@ -0,0 +1,228 @@
require_relative "../support/spec_helper"
module Dalmatian
RSpec.describe AuroraTest do
let(:helper) { class_double(Helper, change_to: true, run!: true, git_checkout: true) }
let(:logger) { class_double(Logger, info: true) }
let(:terraform) do
class_double(
Terraform,
fmt: true,
init: true,
validate: true,
ensure_presence_of_workspace: true
)
end
let(:aurora) do
instance_double(
Aurora,
identifier: "testaurora",
in_use_by: [
"test-service"
],
clusters_in_use: {
"production" => [
"test"
],
"staging" => [
"test"
]
},
minimum_size: {
"production" => 2,
"staging" => 1
},
maximum_size: {
"production" => 2,
"staging" => 1
},
engine: "aurora-postgresql",
engine_version: "11.9",
db_name: "testapp",
port: 5432,
maintenance_window: "mon:19:00-mon:19:30",
backup_window: "09:00-10:00",
backup_retention_period: 31,
force_ssl: true,
cluster_id: "new-dedicated-cluster-id",
account_id: 123456789012,
parameter_store_path_db_url_name: "DATABASE_URL",
sql_backup_scheduled_task_environment_variables: [
{
"name" => "foo",
"value" => "bar"
}
],
check_sql_backup_scheduled_task_environment_variables: [
{
"name" => "foo",
"value" => "bar"
}
],
sync_sql_backup_to_azure: false,
replication_bucket_destination_arn: "arn:aws:s3:::dest-bucket",
replication_kms_key_id: "key-id",
to_params: {
"identifier" => "testaurora",
"in_use_by" => ["test-service"],
"clusters_in_use" => {"production" => ["test"], "staging" => ["test"]},
"minimum_size" => {"production" => 2, "staging" => 1},
"maximum_size" => {"production" => 2, "staging" => 1},
"engine" => "aurora-postgresql",
"engine_version" => "11.9",
"db_name" => "testapp",
"port" => 5432,
"force_ssl" => true,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::dest-bucket",
"replication_kms_key_id" => "key-id"
}
)
end
let!(:aurora_test) do
AuroraTest.new(
aurora: aurora,
env: {name: "staging", details: {"min_servers" => 2, "max_servers" => 4}},
helper: helper,
logger: logger,
terraform: terraform
)
end
describe "#call" do
before do
allow(Terraform).to receive(:init)
allow(Terraform).to receive(:ensure_presence_of_workspace)
allow(logger).to receive(:info)
end
let(:workspace_name) { "new-dedicated-cluster-id-testaurora-aurora-staging" }
before { aurora_test.call }
it "changes to the aurora infrastructure directory" do
directory = File.join(
Infrastructure::APP_ROOT,
Infrastructure::PATH,
"aurora"
)
expect(helper).to have_received(:change_to).with(directory)
end
it "logs our intention to run Terraform init" do
expect(logger).to have_received(:info).with(
"Running terraform init for #{workspace_name}"
)
end
it "runs Terraform init, with upgrade option" do
expect(terraform).to have_received(:init).with(upgrade: true)
end
it "ensures presence of workspace" do
expect(terraform).to have_received(:ensure_presence_of_workspace)
.with(workspace_name)
end
it "logs our intention to run Terraform fmt" do
expect(logger).to have_received(:info).with(
"Running terraform fmt for #{workspace_name}"
)
end
it "runs Terraform fmt with check and diff options" do
expect(terraform).to have_received(:fmt).with("-check -diff")
end
it "logs our intention to run Terraform validate" do
expect(logger).to have_received(:info).with(
"Running terraform validate for #{workspace_name}"
)
end
    it "runs Terraform validate with the environment tfvars" do
env_config = {
"min_servers" => 2,
"max_servers" => 4,
"aurora" => {
"identifier" => "testaurora",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test"
],
"staging" => [
"test"
]
},
"minimum_size" => {
"production" => 2,
"staging" => 1
},
"maximum_size" => {
"production" => 2,
"staging" => 1
},
"engine" => "aurora-postgresql",
"engine_version" => "11.9",
"db_name" => "testapp",
"port" => 5432,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::dest-bucket",
"replication_kms_key_id" => "key-id"
},
"account_id" => 123456789012,
"infrastructure_name" => "new-dedicated-cluster-id",
"environment" => "staging",
"dalmatian_role" => "dalmatian-read"
}
expect(terraform).to have_received(:validate).with(
tfvars: env_config
)
end
it "changes back to the app root directory" do
expect(helper).to have_received(:change_to).with(Infrastructure::APP_ROOT)
end
end
end
end



@@ -0,0 +1,56 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 181-230 of 231 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 230.
--- FILE CONTENT (truncated) ---
],
"clusters_in_use" => {
"production" => [
"test",
"default_dalmatian_ecs_cluster"
],
"staging" => [
"test",
"default_dalmatian_ecs_cluster"
]
},
"minimum_size" => {
"production" => 2,
"staging" => 1
},
"maximum_size" => {
"production" => 2,
"staging" => 1
},
"engine" => "aurora-postgresql",
"engine_version" => "11.9",
"db_name" => "testapp",
"port" => 5432,
"force_ssl" => true,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::dest-bucket",
"replication_kms_key_id" => "key-id"
}
expect(aurora.to_params).to eq(expected_params)
end
end
end
end


@@ -0,0 +1,26 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 581-600 of 3542 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 600.
--- FILE CONTENT (truncated) ---
"staging" => 1
},
"engine" => "aurora-postgresql",
"engine_version" => "11.9",
"db_name" => "testapp",
"port" => 5432,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",


@@ -0,0 +1,26 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 446-465 of 1739 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 465.
--- FILE CONTENT (truncated) ---
terraform_apply_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.medium",
"min_servers" => "2",
"max_servers" => "4",
"environment" => "production",
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-admin",
"aurora" => {
"identifier" => "testaurora",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test"
],
"staging" => [


@@ -0,0 +1,154 @@
module Dalmatian
class Aurora
def initialize(cluster:, reference:)
@cluster = cluster
@reference = reference
end
attr_reader :cluster, :reference
def identifier
reference["identifier"]
end
def in_use_by
reference["in_use_by"] || []
end
def clusters_in_use
clusters_in_use = {}
cluster_environments.each do |env_name, _env_config|
clusters_in_use[env_name] = []
services.each do |service|
in_use_by.each do |service_name|
next if service.name != service_name
clusters_in_use[env_name] << if service.launch_on_cluster.empty?
"default_dalmatian_ecs_cluster"
else
service.launch_on_cluster
end
end
end
if in_use_by.empty?
clusters_in_use[env_name] << "default_dalmatian_ecs_cluster" ## fails SimpleCov
end
clusters_in_use[env_name] = clusters_in_use[env_name].uniq
end
clusters_in_use
end
def minimum_size
minimum_size = {}
reference["minimum_size"] = {} if reference["minimum_size"].nil?
cluster_environments.each do |env_name, _env_config|
minimum_size[env_name] = reference["minimum_size"][env_name] || ""
end
minimum_size
end
def maximum_size
maximum_size = {}
reference["maximum_size"] = {} if reference["maximum_size"].nil?
cluster_environments.each do |env_name, _env_config|
maximum_size[env_name] = reference["maximum_size"][env_name] || ""
end
maximum_size
end
def engine
reference["engine"]
end
def engine_version
reference["engine_version"]
end
def db_name
reference["db_name"]
end
def port
reference["port"] || ""
end
def maintenance_window
reference["maintenance_window"] || ""
end
def backup_window
reference["backup_window"] || ""
end
def backup_retention_period
reference["backup_retention_period"] || ""
end
def parameter_store_path_db_url_name
reference["parameter_store_path_db_url_name"] || ""
end
def sql_backup_scheduled_task_environment_variables
reference["sql_backup_scheduled_task_environment_variables"] || []
end
def check_sql_backup_scheduled_task_environment_variables
reference["check_sql_backup_scheduled_task_environment_variables"] || []
end
def sync_sql_backup_to_azure
reference["sync_sql_backup_to_azure"] || false
end
def replication_bucket_destination_arn
reference["replication_bucket_destination_arn"] || ""
end
def replication_kms_key_id
reference["replication_kms_key_id"] || ""
end
def force_ssl
reference["force_ssl"] || "false"
end
def cluster_id
cluster.id
end
def cluster_environments
cluster.environments
end
def account_id
cluster.account_id
end
def services
cluster.services
end
def to_params
{
"identifier" => identifier,
"in_use_by" => in_use_by,
"clusters_in_use" => clusters_in_use,
"minimum_size" => minimum_size,
"maximum_size" => maximum_size,
"engine" => engine,
"engine_version" => engine_version,
"db_name" => db_name,
"port" => port,
"force_ssl" => force_ssl,
"maintenance_window" => maintenance_window,
"backup_window" => backup_window,
"backup_retention_period" => backup_retention_period,
"parameter_store_path_db_url_name" => parameter_store_path_db_url_name,
"sql_backup_scheduled_task_environment_variables" => sql_backup_scheduled_task_environment_variables,
"check_sql_backup_scheduled_task_environment_variables" => check_sql_backup_scheduled_task_environment_variables,
"sync_sql_backup_to_azure" => sync_sql_backup_to_azure,
"replication_bucket_destination_arn" => replication_bucket_destination_arn,
"replication_kms_key_id" => replication_kms_key_id
}
end
end
end
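The `clusters_in_use` fallback in the class above (default to `default_dalmatian_ecs_cluster` when a service has no `launch_on_cluster`, or when nothing consumes the database) can be exercised with lightweight stand-ins. A condensed, hypothetical restatement using a `Struct` in place of the real `Service` objects:

```ruby
# Struct stand-in for the real Dalmatian::Service (hypothetical).
Service = Struct.new(:name, :launch_on_cluster)

# Condensed restatement of Aurora#clusters_in_use for illustration.
def clusters_in_use(environments, services, in_use_by)
  environments.each_with_object({}) do |(env_name, _config), result|
    clusters = services
      .select { |service| in_use_by.include?(service.name) }
      .map { |s| s.launch_on_cluster.empty? ? "default_dalmatian_ecs_cluster" : s.launch_on_cluster }
    # With no consuming services, fall back to the default cluster.
    clusters << "default_dalmatian_ecs_cluster" if in_use_by.empty?
    result[env_name] = clusters.uniq
  end
end

environments = {"staging" => {}, "production" => {}}
services = [Service.new("test-service", "test"), Service.new("unrelated", "")]
p clusters_in_use(environments, services, ["test-service"])
```

This mirrors the fixture expectations in the specs above, where `testaurora` resolves to the `test` cluster in both environments.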


@@ -0,0 +1,106 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 551-650 of 3542 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 650.
--- FILE CONTENT (truncated) ---
let(:cluster1_validate_cmd_production_aurora) do
terraform_validate_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.medium",
"min_servers" => "2",
"max_servers" => "4",
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"environment" => "production",
"aurora" => {
"identifier" => "testaurora",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test"
],
"staging" => [
"test"
]
},
"minimum_size" => {
"production" => 2,
"staging" => 1
},
"maximum_size" => {
"production" => 2,
"staging" => 1
},
"engine" => "aurora-postgresql",
"engine_version" => "11.9",
"db_name" => "testapp",
"port" => 5432,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::your-destination-bucket-name",
"replication_kms_key_id" => "your-destination-kms-key-id"
}
)
)
)
end
let(:cluster1_validate_cmd_staging_aurora) do
terraform_validate_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "4",
"track_revision" => "feature/experiment",
"extra_ecs_clusters" => [
{
"name" => "test",
"subnets_name" => "extra_private_subnets",
"min_servers" => "2",
"max_servers" => "4",
"instance_type" => "t3.small"
}
],
"tinyproxy" => {
"create" => true
},
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"environment" => "staging",
"aurora" => {
"identifier" => "testaurora",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test"
],
"staging" => [
"test"
]
},
"minimum_size" => {


@@ -0,0 +1,56 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 151-200 of 443 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 200.
--- FILE CONTENT (truncated) ---
aurora:
- identifier: testaurora
in_use_by:
- test-service
engine: 'aurora-postgresql'
engine_version: '11.9'
db_name: 'testapp'
port: 5432
maintenance_window: 'mon:19:00-mon:19:30'
backup_window: '09:00-10:00'
backup_retention_period: 31
force_ssl: true
minimum_size:
staging: 1
production: 2
maximum_size:
staging: 1
production: 2
parameter_store_path_db_url_name: 'DATABASE_URL'
sql_backup_scheduled_task_environment_variables:
- name: "foo"
value: "bar"
check_sql_backup_scheduled_task_environment_variables:
- name: "foo"
value: "bar"
replication_bucket_destination_arn: "arn:aws:s3:::your-destination-bucket-name"
replication_kms_key_id: "your-destination-kms-key-id"
elasticache_cluster:
- identifier: testredis
in_use_by:
- test-service
engine: 'redis'
node_type: 'cache.t2.micro'
node_count: 1
engine_version: '5.0.6'
port: 6379
maintenance_window: 'mon:19:00-mon:22:00'
snapshot_window: '09:00-10:00'
parameter_store_path_elasticache_cluster_url_name: 'REDIS_URL'
opensearch_cluster:
- identifier: testos
in_use_by:
- test-service
version: '1.2'
master_enabled: true
master_count: '1'
master_type: 'c6g.large.search'
instance_count: '3'
instance_type: 't3.small.search'



@@ -0,0 +1,56 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 3301-3350 of 3542 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 3350.
--- FILE CONTENT (truncated) ---
"account_id" => "123456789012",
"infrastructure_name" => "existing-shared-cluster-production",
"dalmatian_role" => "dalmatian-read",
"environment" => "production"
)
)
)
end
let(:cluster4_plan_cmd_production) do
terraform_plan_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"example_var" => "bar",
"environment" => "production",
"cluster_name" => "shared-cluster-production",
"account_id" => "123456789012",
"infrastructure_name" => "existing-shared-cluster-production",
"dalmatian_role" => "dalmatian-read"
)
)
)
end
before do
allow(Logger).to receive(:info)
stub_git_clone_cmds
stub_git_checkout
stub_workspace_creation_cmds
allow(Helper).to receive(:run!).with("rake terrafile")
allow(Helper).to receive(:run!).with("terraform init -upgrade=true")
allow(Helper).to receive(:run!).with("terraform fmt -check -diff")
allow(Helper).to receive(:run!).with(cluster1_validate_cmd_hz)
allow(Helper).to receive(:run!).with(cluster1_validate_cmd_s3)
allow(Helper).to receive(:run!).with(cluster1_validate_cmd_vpn_cg)
allow(Helper).to receive(:run!).with(cluster1_plan_cmd_hz)
allow(Helper).to receive(:run!).with(cluster1_plan_cmd_s3)
allow(Helper).to receive(:run!).with(cluster1_plan_cmd_vpn_cg)
allow(Helper).to receive(:run!).with(cluster1_validate_cmd_production)
allow(Helper).to receive(:run!).with(cluster1_validate_cmd_staging)
allow(Helper).to receive(:run!).with(cluster1_plan_cmd_production)
allow(Helper).to receive(:run!).with(cluster1_plan_cmd_staging)
allow(Helper).to receive(:run!).with(source1_validate_cmd_production)
allow(Helper).to receive(:run!).with(source1_validate_cmd_staging)
allow(Helper).to receive(:run!).with(cluster1_validate_cmd_production_waf)
allow(Helper).to receive(:run!).with(cluster1_validate_cmd_staging_waf)

View File

@@ -0,0 +1,106 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 3401-3500 of 3542 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 3500.
--- FILE CONTENT (truncated) ---
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_hz)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_s3)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_vpn_cg)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production).twice
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging).twice
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_waf)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_waf)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_rds)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_rds)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_aurora)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_aurora)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_elasticache_cluster)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_elasticache_cluster)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_opensearch_cluster)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_opensearch_cluster)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_service)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_service)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_shared_loadbalancer)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_shared_loadbalancer)
expect(Helper).to have_received(:run!).with(cluster2_validate_cmd_production)
expect(Helper).to have_received(:run!).with(cluster2_validate_cmd_staging)
expect(Helper).to have_received(:run!).with(cluster3_validate_cmd_staging)
expect(Helper).to have_received(:run!).with(cluster4_validate_cmd_production)
end
it 'invokes "terraform validate" with the expected source options' do
expect(Helper).to have_received(:run!).with(source1_validate_cmd_production).twice
expect(Helper).to have_received(:run!).with(source1_validate_cmd_staging).twice
end
describe 'use of "terraform plan"' do
it 'passes the hosted zone config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_hz)
end
end
it 'passes the s3 config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_s3)
end
end
it 'passes the vpn customer gateway config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_vpn_cg)
end
end
it 'passes the base config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_production).twice
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_staging).twice
end
end
it 'passes the waf config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_production_waf)
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_staging_waf)
end
end
it 'passes the rds config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_production_rds)
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_staging_rds)
end
end
it 'passes the aurora config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_production_aurora)
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_staging_aurora)
end
end
it 'passes the elasticache config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!)
.with(cluster1_plan_cmd_production_elasticache_cluster)
expect(Helper).to have_received(:run!)
.with(cluster1_plan_cmd_staging_elasticache_cluster)
end
end
it 'passes the opensearch config to "terraform plan"' do

View File

@@ -0,0 +1,106 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 351-450 of 3542 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 450.
--- FILE CONTENT (truncated) ---
)
end
let(:cluster1_validate_cmd_production_rds) do
terraform_validate_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.medium",
"min_servers" => "2",
"max_servers" => "4",
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"environment" => "production",
"rds" => {
"identifier" => "testservice",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test"
],
"staging" => [
"test"
]
},
"instance_class" => {
"production" => "db.t2.small",
"staging" => "db.t2.micro"
},
"engine" => "postgres",
"engine_version" => "11.4",
"allocated_storage" => 20,
"storage_encrypted" => true,
"storage_type" => "gp3",
"db_name" => "testapp",
"port" => 5432,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::your-destination-bucket-name",
"replication_kms_key_id" => "your-destination-kms-key-id",
"codebuild_access" => [
"service-name"
]
}
)
)
)
end
let(:cluster1_validate_cmd_staging_waf) do
terraform_validate_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "4",
"track_revision" => "feature/experiment",
"extra_ecs_clusters" => [
{
"name" => "test",
"subnets_name" => "extra_private_subnets",
"min_servers" => "2",
"max_servers" => "4",
"instance_type" => "t3.small"
}
],
"tinyproxy" => {
"create" => true
},
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"environment" => "staging",
"waf" => {
"name" => "test-1",
"action" => "count",
"ip_deny_list" => [],
"aws_managed_rules" => [
{
"name" => "AWSManagedRulesSQLiRuleSet",
"excluded_path_patterns" => [
"/wp-admin/async-upload.php"

View File

@@ -0,0 +1,56 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 401-450 of 1739 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 450.
--- FILE CONTENT (truncated) ---
"staging" => [
"test"
]
},
"instance_class" => {
"production" => "db.t2.small",
"staging" => "db.t2.micro"
},
"engine" => "postgres",
"engine_version" => "11.4",
"allocated_storage" => 20,
"storage_encrypted" => true,
"storage_type" => "gp3",
"db_name" => "testapp",
"port" => 5432,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"check_sql_backup_scheduled_task_environment_variables" => [
{
"name" => "foo",
"value" => "bar"
}
],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::your-destination-bucket-name",
"replication_kms_key_id" => "your-destination-kms-key-id",
"codebuild_access" => [
"service-name"
]
}
)
)
)
end
let(:cluster1_apply_cmd_production_aurora) do
terraform_apply_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.medium",
"min_servers" => "2",

View File

@@ -0,0 +1,164 @@
module Dalmatian
class Rds
def initialize(cluster:, reference:)
@cluster = cluster
@reference = reference
end
attr_reader :cluster, :reference
def identifier
reference["identifier"]
end
def in_use_by
reference["in_use_by"] || []
end
def clusters_in_use
clusters_in_use = {}
cluster_environments.each do |env_name, _env_config|
clusters_in_use[env_name] = []
services.each do |service|
in_use_by.each do |service_name|
next if service.name != service_name
clusters_in_use[env_name] << if service.launch_on_cluster.empty?
"default_dalmatian_ecs_cluster"
else
service.launch_on_cluster
end
end
end
if in_use_by.empty?
clusters_in_use[env_name] << "default_dalmatian_ecs_cluster" ## fails SimpleCov
end
clusters_in_use[env_name] = clusters_in_use[env_name].uniq
end
clusters_in_use
end
def instance_class
instance_class = {}
reference["instance_class"] = {} if reference["instance_class"].nil?
cluster_environments.each do |env_name, _env_config|
instance_class[env_name] = reference["instance_class"][env_name] || ""
end
instance_class
end
def engine
reference["engine"]
end
def engine_version
reference["engine_version"]
end
def allocated_storage
reference["allocated_storage"] || ""
end
def storage_encrypted
reference["storage_encrypted"].nil? || reference["storage_encrypted"]
end
def storage_type
reference["storage_type"] || "gp3"
end
def db_name
reference["db_name"]
end
def port
reference["port"] || ""
end
def maintenance_window
reference["maintenance_window"] || ""
end
def backup_window
reference["backup_window"] || ""
end
def backup_retention_period
reference["backup_retention_period"] || ""
end
def force_ssl
reference["force_ssl"] || "false"
end
def parameter_store_path_db_url_name
reference["parameter_store_path_db_url_name"] || ""
end
def sql_backup_scheduled_task_environment_variables
reference["sql_backup_scheduled_task_environment_variables"] || []
end
def check_sql_backup_scheduled_task_environment_variables
reference["check_sql_backup_scheduled_task_environment_variables"] || []
end
def sync_sql_backup_to_azure
reference["sync_sql_backup_to_azure"] || false
end
def replication_bucket_destination_arn
reference["replication_bucket_destination_arn"] || ""
end
def replication_kms_key_id
reference["replication_kms_key_id"] || ""
end
def codebuild_access
reference["codebuild_access"] || []
end
def cluster_id
cluster.id
end
def cluster_environments
cluster.environments
end
def account_id
cluster.account_id
end
def services
cluster.services
end
def to_params
{
"identifier" => identifier,
"in_use_by" => in_use_by,
"clusters_in_use" => clusters_in_use,
"instance_class" => instance_class,
"engine" => engine,
"engine_version" => engine_version,
"allocated_storage" => allocated_storage,
"storage_encrypted" => storage_encrypted,
"storage_type" => storage_type,
"db_name" => db_name,
"port" => port,
"maintenance_window" => maintenance_window,
"backup_window" => backup_window,
"backup_retention_period" => backup_retention_period,
"force_ssl" => force_ssl,
"parameter_store_path_db_url_name" => parameter_store_path_db_url_name,
"sql_backup_scheduled_task_environment_variables" => sql_backup_scheduled_task_environment_variables,
"check_sql_backup_scheduled_task_environment_variables" => check_sql_backup_scheduled_task_environment_variables,
"sync_sql_backup_to_azure" => sync_sql_backup_to_azure,
"replication_bucket_destination_arn" => replication_bucket_destination_arn,
"replication_kms_key_id" => replication_kms_key_id,
"codebuild_access" => codebuild_access
}
end
end
end
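One subtlety in the accessors above: `storage_encrypted` defaults to `true` when the key is absent, while an explicit `false` is honoured. A self-contained sketch of that pattern (standalone illustration, not the project's code):

```ruby
# The "default to true unless explicitly set" idiom used by
# Rds#storage_encrypted: a missing key yields true, an explicit
# false is preserved, and an explicit true passes through.
def storage_encrypted?(reference)
  reference["storage_encrypted"].nil? || reference["storage_encrypted"]
end

puts storage_encrypted?({})                            # => true  (key absent)
puts storage_encrypted?("storage_encrypted" => false)  # => false (explicit)
```

A plain `reference["storage_encrypted"] || true` would silently override an explicit `false`, which is why the `nil?` check matters here.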

View File

@@ -0,0 +1,105 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 3351-3450 of 3542 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 3450.
--- FILE CONTENT (truncated) ---
allow(Helper).to receive(:run!).with(cluster1_plan_cmd_production_waf)
allow(Helper).to receive(:run!).with(cluster1_plan_cmd_staging_waf)
allow(Helper).to receive(:run!).with(cluster1_validate_cmd_production_rds)
allow(Helper).to receive(:run!).with(cluster1_validate_cmd_staging_rds)
allow(Helper).to receive(:run!).with(cluster1_plan_cmd_production_rds)
allow(Helper).to receive(:run!).with(cluster1_plan_cmd_staging_rds)
allow(Helper).to receive(:run!).with(cluster1_validate_cmd_production_aurora)
allow(Helper).to receive(:run!).with(cluster1_validate_cmd_staging_aurora)
allow(Helper).to receive(:run!).with(cluster1_plan_cmd_production_aurora)
allow(Helper).to receive(:run!).with(cluster1_plan_cmd_staging_aurora)
allow(Helper).to receive(:run!).with(cluster1_validate_cmd_production_elasticache_cluster)
allow(Helper).to receive(:run!).with(cluster1_validate_cmd_staging_elasticache_cluster)
allow(Helper).to receive(:run!).with(cluster1_plan_cmd_production_elasticache_cluster)
allow(Helper).to receive(:run!).with(cluster1_plan_cmd_staging_elasticache_cluster)
allow(Helper).to receive(:run!).with(cluster1_validate_cmd_production_opensearch_cluster)
allow(Helper).to receive(:run!).with(cluster1_validate_cmd_staging_opensearch_cluster)
allow(Helper).to receive(:run!).with(cluster1_plan_cmd_production_opensearch_cluster)
allow(Helper).to receive(:run!).with(cluster1_plan_cmd_staging_opensearch_cluster)
allow(Helper).to receive(:run!).with(cluster1_validate_cmd_production_service)
allow(Helper).to receive(:run!).with(cluster1_validate_cmd_staging_service)
allow(Helper).to receive(:run!).with(cluster1_plan_cmd_production_service)
allow(Helper).to receive(:run!).with(cluster1_plan_cmd_staging_service)
allow(Helper).to receive(:run!).with(cluster1_validate_cmd_production_shared_loadbalancer)
allow(Helper).to receive(:run!).with(cluster1_validate_cmd_staging_shared_loadbalancer)
allow(Helper).to receive(:run!).with(cluster1_plan_cmd_production_shared_loadbalancer)
allow(Helper).to receive(:run!).with(cluster1_plan_cmd_staging_shared_loadbalancer)
allow(Helper).to receive(:run!).with(cluster2_validate_cmd_production)
allow(Helper).to receive(:run!).with(cluster2_validate_cmd_staging)
allow(Helper).to receive(:run!).with(cluster2_plan_cmd_production)
allow(Helper).to receive(:run!).with(cluster2_plan_cmd_staging)
allow(Helper).to receive(:run!).with(cluster3_validate_cmd_staging)
allow(Helper).to receive(:run!).with(cluster3_plan_cmd_staging)
allow(Helper).to receive(:run!).with(cluster4_validate_cmd_production)
allow(Helper).to receive(:run!).with(cluster4_plan_cmd_production)
Infrastructure.new(
ConfigurationReader.new(path: "./dalmatian.yml.example", parameter_store: fake_parameter_store).infrastructures
).test
end
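The `allow(Helper).to receive(:run!)` stubs above, paired with the later `have_received` assertions, use RSpec's spy pattern: record every shell command instead of running it, then assert on the recording. A hand-rolled plain-Ruby equivalent of the same idea (illustrative only; the specs use RSpec doubles, not this class):

```ruby
# Minimal spy: record each command passed to run! rather than executing
# it, so a test can assert on exactly which commands were issued.
class RunRecorder
  attr_reader :calls

  def initialize
    @calls = []
  end

  def run!(command)
    calls << command
  end
end

helper = RunRecorder.new
helper.run!("terraform init -upgrade=true")
helper.run!("terraform fmt -check -diff")
puts helper.calls.include?("terraform fmt -check -diff") # => true
```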
it 'invokes "terraform validate" with the expected cluster options' do
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_hz)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_s3)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_vpn_cg)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production).twice
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging).twice
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_waf)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_waf)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_rds)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_rds)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_aurora)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_aurora)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_elasticache_cluster)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_elasticache_cluster)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_opensearch_cluster)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_opensearch_cluster)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_service)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_service)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_production_shared_loadbalancer)
expect(Helper).to have_received(:run!).with(cluster1_validate_cmd_staging_shared_loadbalancer)
expect(Helper).to have_received(:run!).with(cluster2_validate_cmd_production)
expect(Helper).to have_received(:run!).with(cluster2_validate_cmd_staging)
expect(Helper).to have_received(:run!).with(cluster3_validate_cmd_staging)
expect(Helper).to have_received(:run!).with(cluster4_validate_cmd_production)
end
it 'invokes "terraform validate" with the expected source options' do
expect(Helper).to have_received(:run!).with(source1_validate_cmd_production).twice
expect(Helper).to have_received(:run!).with(source1_validate_cmd_staging).twice
end
describe 'use of "terraform plan"' do
it 'passes the hosted zone config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_hz)
end
end

View File

@@ -0,0 +1,47 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 3501-3542 of 3542 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 3542.
--- FILE CONTENT (truncated) ---
aggregate_failures do
expect(Helper).to have_received(:run!)
.with(cluster1_plan_cmd_production_opensearch_cluster)
expect(Helper).to have_received(:run!)
.with(cluster1_plan_cmd_staging_opensearch_cluster)
end
end
it 'passes the services config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_production_service)
expect(Helper).to have_received(:run!).with(cluster1_plan_cmd_staging_service)
end
end
it 'passes the loadbalancer config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!)
.with(cluster1_plan_cmd_production_shared_loadbalancer)
expect(Helper).to have_received(:run!)
.with(cluster1_plan_cmd_staging_shared_loadbalancer)
end
end
it 'passes the cluster 2 config to "terraform plan"' do
aggregate_failures do
expect(Helper).to have_received(:run!).with(cluster2_plan_cmd_production)
expect(Helper).to have_received(:run!).with(cluster2_plan_cmd_staging)
end
end
it 'passes the cluster 3 config to "terraform plan"' do
expect(Helper).to have_received(:run!).with(cluster3_plan_cmd_staging)
end
it 'passes the cluster 4 config to "terraform plan"' do
expect(Helper).to have_received(:run!).with(cluster4_plan_cmd_production)
end
end
end
end

View File

@@ -0,0 +1,149 @@
module Dalmatian
class Aurora
def initialize(cluster:, reference:)
@cluster = cluster
@reference = reference
end
attr_reader :cluster, :reference
def identifier
reference["identifier"]
end
def in_use_by
reference["in_use_by"] || []
end
def clusters_in_use
clusters_in_use = {}
cluster_environments.each do |env_name, _env_config|
clusters_in_use[env_name] = []
services.each do |service|
in_use_by.each do |service_name|
next if service.name != service_name
clusters_in_use[env_name] << if service.launch_on_cluster.empty?
"default_dalmatian_ecs_cluster"
else
service.launch_on_cluster
end
end
end
if in_use_by.empty?
clusters_in_use[env_name] << "default_dalmatian_ecs_cluster" ## fails SimpleCov
end
clusters_in_use[env_name] = clusters_in_use[env_name].uniq
end
clusters_in_use
end
def minimum_size
minimum_size = {}
reference["minimum_size"] = {} if reference["minimum_size"].nil?
cluster_environments.each do |env_name, _env_config|
minimum_size[env_name] = reference["minimum_size"][env_name] || ""
end
minimum_size
end
def maximum_size
maximum_size = {}
reference["maximum_size"] = {} if reference["maximum_size"].nil?
cluster_environments.each do |env_name, _env_config|
maximum_size[env_name] = reference["maximum_size"][env_name] || ""
end
maximum_size
end
def engine
reference["engine"]
end
def engine_version
reference["engine_version"]
end
def db_name
reference["db_name"]
end
def port
reference["port"] || ""
end
def maintenance_window
reference["maintenance_window"] || ""
end
def backup_window
reference["backup_window"] || ""
end
def backup_retention_period
reference["backup_retention_period"] || ""
end
def parameter_store_path_db_url_name
reference["parameter_store_path_db_url_name"] || ""
end
def sql_backup_scheduled_task_environment_variables
reference["sql_backup_scheduled_task_environment_variables"] || []
end
def sync_sql_backup_to_azure
reference["sync_sql_backup_to_azure"] || false
end
def replication_bucket_destination_arn
reference["replication_bucket_destination_arn"] || ""
end
def replication_kms_key_id
reference["replication_kms_key_id"] || ""
end
def force_ssl
reference["force_ssl"] || "false"
end
def cluster_id
cluster.id
end
def cluster_environments
cluster.environments
end
def account_id
cluster.account_id
end
def services
cluster.services
end
def to_params
{
"identifier" => identifier,
"in_use_by" => in_use_by,
"clusters_in_use" => clusters_in_use,
"minimum_size" => minimum_size,
"maximum_size" => maximum_size,
"engine" => engine,
"engine_version" => engine_version,
"db_name" => db_name,
"port" => port,
"force_ssl" => force_ssl,
"maintenance_window" => maintenance_window,
"backup_window" => backup_window,
"backup_retention_period" => backup_retention_period,
"parameter_store_path_db_url_name" => parameter_store_path_db_url_name,
"sql_backup_scheduled_task_environment_variables" => sql_backup_scheduled_task_environment_variables,
"sync_sql_backup_to_azure" => sync_sql_backup_to_azure,
"replication_bucket_destination_arn" => replication_bucket_destination_arn,
"replication_kms_key_id" => replication_kms_key_id
}
end
end
end
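`clusters_in_use` is the densest method shared by the `Rds` and `Aurora` classes: for each environment it collects the ECS clusters that the database's consuming services launch on, substituting the default cluster when a service has no explicit `launch_on_cluster` (or when `in_use_by` is empty). A self-contained sketch of that resolution logic (the `Service` struct is a stand-in, not the project's class):

```ruby
# Stand-in for the project's service objects: only the two fields the
# resolution logic reads.
Service = Struct.new(:name, :launch_on_cluster)

def clusters_in_use(environments, services, in_use_by)
  environments.keys.to_h do |env_name|
    clusters = services
               .select { |service| in_use_by.include?(service.name) }
               .map do |service|
                 if service.launch_on_cluster.empty?
                   "default_dalmatian_ecs_cluster"
                 else
                   service.launch_on_cluster
                 end
               end
    # No declared consumers: fall back to the default cluster outright.
    clusters << "default_dalmatian_ecs_cluster" if in_use_by.empty?
    [env_name, clusters.uniq]
  end
end

p clusters_in_use({ "staging" => {} },
                  [Service.new("test-service", "test"), Service.new("test-service", "")],
                  ["test-service"])
# => {"staging"=>["test", "default_dalmatian_ecs_cluster"]}
```

This mirrors the `#clusters_in_use` expectation in the Aurora spec, where one consumer with an explicit cluster and one without yield both names per environment.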

View File

@@ -0,0 +1,202 @@
require_relative "../support/spec_helper"

module Dalmatian
  RSpec.describe Aurora do
    let(:aurora_reference) do
      {
        "identifier" => "testaurora",
        "in_use_by" => [
          "test-service"
        ],
        "minimum_size" => {
          "production" => 2,
          "staging" => 1
        },
        "maximum_size" => {
          "production" => 2,
          "staging" => 1
        },
        "engine" => "aurora-postgresql",
        "engine_version" => "11.9",
        "db_name" => "testapp",
        "port" => 5432,
        "maintenance_window" => "mon:19:00-mon:19:30",
        "backup_window" => "09:00-10:00",
        "backup_retention_period" => 31,
        "force_ssl" => true,
        "parameter_store_path_db_url_name" => "DATABASE_URL",
        "sql_backup_scheduled_task_environment_variables" => [],
        "sync_sql_backup_to_azure" => false,
        "replication_bucket_destination_arn" => "arn:aws:s3:::dest-bucket",
        "replication_kms_key_id" => "key-id"
      }
    end

    let(:cluster) do
      instance_double(
        Cluster,
        id: "new-dedicated-cluster",
        name: "new-dedicated-cluster",
        environments: {"staging" => {}, "production" => {}},
        services: [
          double(name: "test-service", domain_names: {"staging" => ["example-domain-name.co.uk"]}, launch_on_cluster: "test"),
          double(name: "test-service", domain_names: {"staging" => ["example-domain-name.co.uk"]}, launch_on_cluster: "")
        ],
        account_id: 123456789012,
        auroras: [
          double(reference: {identifier: "someotheraurora"}),
          double(reference: aurora_reference)
        ]
      )
    end

    let(:aurora) { Aurora.new(cluster: cluster, reference: aurora_reference) }

    describe "#identifier" do
      it "uses the aurora identifier" do
        expect(aurora.identifier).to eq("testaurora")
      end
    end

    describe "#in_use_by" do
      it "uses the aurora in_use_by list" do
        expect(aurora.in_use_by).to eq(["test-service"])
      end
    end

    describe "#clusters_in_use" do
      it "uses the aurora clusters_in_use list" do
        expect(aurora.clusters_in_use).to eq({"staging" => ["test", "default_dalmatian_ecs_cluster"], "production" => ["test", "default_dalmatian_ecs_cluster"]})
      end
    end

    describe "#minimum_size" do
      it "uses the aurora minimum_size" do
        expect(aurora.minimum_size).to eq({"production" => 2, "staging" => 1})
      end
    end

    describe "#maximum_size" do
      it "uses the aurora maximum_size" do
        expect(aurora.maximum_size).to eq({"production" => 2, "staging" => 1})
      end
    end

    describe "#engine" do
      it "uses the aurora engine" do
        expect(aurora.engine).to eq("aurora-postgresql")
      end
    end

    describe "#engine_version" do
      it "uses the aurora engine_version" do
        expect(aurora.engine_version).to eq("11.9")
      end
    end

    describe "#db_name" do
      it "uses the aurora db_name" do
        expect(aurora.db_name).to eq("testapp")
      end
    end

    describe "#port" do
      it "uses the aurora port" do
        expect(aurora.port).to eq(5432)
      end
    end

    describe "#maintenance_window" do
      it "uses the aurora maintenance_window" do
        expect(aurora.maintenance_window).to eq("mon:19:00-mon:19:30")
      end
    end

    describe "#backup_window" do
      it "uses the aurora backup_window" do
        expect(aurora.backup_window).to eq("09:00-10:00")
      end
    end

    describe "#backup_retention_period" do
      it "uses the aurora backup_retention_period" do
        expect(aurora.backup_retention_period).to eq(31)
      end
    end

    describe "#force_ssl" do
      it "uses the aurora force_ssl flag" do
        expect(aurora.force_ssl).to eq(true)
      end
    end

    describe "#parameter_store_path_db_url_name" do
      it "uses the aurora parameter_store_path_db_url_name" do
        expect(aurora.parameter_store_path_db_url_name).to eq("DATABASE_URL")
      end
    end

    describe "#sql_backup_scheduled_task_environment_variables" do
      it "uses the aurora sql_backup_scheduled_task_environment_variables" do
        expect(aurora.sql_backup_scheduled_task_environment_variables).to eq([])
      end
    end

    describe "#sync_sql_backup_to_azure" do
      it "has offsite backups disabled by default" do
        expect(aurora.sync_sql_backup_to_azure).to eq(false)
      end
    end

    describe "#replication_bucket_destination_arn" do
      it "uses the aurora replication_bucket_destination_arn" do
        expect(aurora.replication_bucket_destination_arn).to eq("arn:aws:s3:::dest-bucket")
      end
    end

    describe "#replication_kms_key_id" do
      it "uses the aurora replication_kms_key_id" do
        expect(aurora.replication_kms_key_id).to eq("key-id")
      end
    end

    describe "#to_params" do
      it "provides a hash of attributes for use in deployment" do
        expected_params = {
          "identifier" => "testaurora",
          "in_use_by" => [
            "test-service"
          ],
          "clusters_in_use" => {
            "production" => [
              "test",
              "default_dalmatian_ecs_cluster"
            ],
            "staging" => [
              "test",
              "default_dalmatian_ecs_cluster"
            ]
          },
          "minimum_size" => {
            "production" => 2,
            "staging" => 1
          },
          "maximum_size" => {
            "production" => 2,
            "staging" => 1
          },
          "engine" => "aurora-postgresql",
          "engine_version" => "11.9",
          "db_name" => "testapp",
          "port" => 5432,
          "force_ssl" => true,
          "maintenance_window" => "mon:19:00-mon:19:30",
          "backup_window" => "09:00-10:00",
          "backup_retention_period" => 31,
          "parameter_store_path_db_url_name" => "DATABASE_URL",
          "sql_backup_scheduled_task_environment_variables" => [],
          "sync_sql_backup_to_azure" => false,
          "replication_bucket_destination_arn" => "arn:aws:s3:::dest-bucket",
          "replication_kms_key_id" => "key-id"
        }
        expect(aurora.to_params).to eq(expected_params)
      end
    end
  end
end
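Downstream, a hash like `expected_params` above is serialized into a terraform `-var` argument by the suite's `tfvar_arg_string` helper (not shown here). A minimal sketch of that kind of serialization, using a hypothetical `tfvar_arg` helper that may differ from the real implementation:

```ruby
require "json"

# Hypothetical helper; the suite's real tfvar_arg_string may differ.
def tfvar_arg(name, params)
  "-var '#{name}=#{JSON.generate(params)}'"
end

aurora_params = {"identifier" => "testaurora", "port" => 5432}
puts tfvar_arg("aurora", aurora_params)
# => -var 'aurora={"identifier":"testaurora","port":5432}'
```

JSON is a convenient wire format here because terraform accepts JSON-encoded values for complex variable types such as maps and lists.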


@@ -0,0 +1,106 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 501-600 of 1719 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 600.
--- FILE CONTENT (truncated) ---
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "4",
"track_revision" => "feature/experiment",
"extra_ecs_clusters" => [
{
"name" => "test",
"subnets_name" => "extra_private_subnets",
"min_servers" => "2",
"max_servers" => "4",
"instance_type" => "t3.small"
}
],
"tinyproxy" => {
"create" => true
},
"environment" => "staging",
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-admin",
"aurora" => {
"identifier" => "testaurora",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test"
],
"staging" => [
"test"
]
},
"minimum_size" => {
"production" => 2,
"staging" => 1
},
"maximum_size" => {
"production" => 2,
"staging" => 1
},
"engine" => "aurora-postgresql",
"engine_version" => "11.9",
"db_name" => "testapp",
"port" => 5432,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::your-destination-bucket-name",
"replication_kms_key_id" => "your-destination-kms-key-id"
}
)
)
)
end
let(:cluster1_apply_cmd_production_elasticache_cluster) do
terraform_apply_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.medium",
"min_servers" => "2",
"max_servers" => "4",
"environment" => "production",
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-admin",
"elasticache_cluster" => {
"identifier" => "testredis",
"in_use_by" => [
"test-service"
],
"node_type" => "cache.t2.micro",
"node_count" => 1,
"engine" => "redis",
"engine_version" => "5.0.6",
"parameters" => [],
"port" => 6379,
"maintenance_window" => "mon:19:00-mon:22:00",
"snapshot_window" => "09:00-10:00",
"parameter_store_path_elasticache_cluster_url_name" => "REDIS_URL"
}
)
)
)
end
let(:cluster1_apply_cmd_staging_elasticache_cluster) do
terraform_apply_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "4",
"track_revision" => "feature/experiment",


@@ -0,0 +1,106 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 551-650 of 3502 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 650.
--- FILE CONTENT (truncated) ---
let(:cluster1_validate_cmd_production_aurora) do
terraform_validate_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.medium",
"min_servers" => "2",
"max_servers" => "4",
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"environment" => "production",
"aurora" => {
"identifier" => "testaurora",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test"
],
"staging" => [
"test"
]
},
"minimum_size" => {
"production" => 2,
"staging" => 1
},
"maximum_size" => {
"production" => 2,
"staging" => 1
},
"engine" => "aurora-postgresql",
"engine_version" => "11.9",
"db_name" => "testapp",
"port" => 5432,
"maintenance_window" => "mon:19:00-mon:19:30",
"backup_window" => "09:00-10:00",
"backup_retention_period" => 31,
"force_ssl" => true,
"parameter_store_path_db_url_name" => "DATABASE_URL",
"sql_backup_scheduled_task_environment_variables" => [],
"sync_sql_backup_to_azure" => false,
"replication_bucket_destination_arn" => "arn:aws:s3:::your-destination-bucket-name",
"replication_kms_key_id" => "your-destination-kms-key-id"
}
)
)
)
end
let(:cluster1_validate_cmd_staging_aurora) do
terraform_validate_cmd(
tfvar_arg_string(
infrastructure_defaults.merge(
"instance_type" => "t2.small",
"min_servers" => "2",
"max_servers" => "4",
"track_revision" => "feature/experiment",
"extra_ecs_clusters" => [
{
"name" => "test",
"subnets_name" => "extra_private_subnets",
"min_servers" => "2",
"max_servers" => "4",
"instance_type" => "t3.small"
}
],
"tinyproxy" => {
"create" => true
},
"account_id" => "123456789012",
"infrastructure_name" => "new-dedicated-cluster",
"dalmatian_role" => "dalmatian-read",
"environment" => "staging",
"aurora" => {
"identifier" => "testaurora",
"in_use_by" => [
"test-service"
],
"clusters_in_use" => {
"production" => [
"test"
],
"staging" => [
"test"
]
},
"minimum_size" => {
"production" => 2,
"staging" => 1
},
"maximum_size" => {
"production" => 2,
"staging" => 1
},
"engine" => "aurora-postgresql",
"engine_version" => "11.9",
"db_name" => "testapp",


@@ -0,0 +1,106 @@
IMPORTANT: The file content has been truncated.
Status: Showing lines 131-230 of 437 total lines.
Action: To read more of the file, you can use the 'offset' and 'limit' parameters in a subsequent 'read_file' call. For example, to read the next section of the file, use offset: 230.
--- FILE CONTENT (truncated) ---
    allocated_storage: 20
    storage_encrypted: true
    storage_type: 'gp3'
    db_name: 'testapp'
    port: 5432
    maintenance_window: 'mon:19:00-mon:19:30'
    backup_window: '09:00-10:00'
    backup_retention_period: 31
    force_ssl: true
    parameter_store_path_db_url_name: 'DATABASE_URL'
    sql_backup_scheduled_task_environment_variables:
      - name: "foo"
        value: "bar"
    check_sql_backup_scheduled_task_environment_variables:
      - name: "foo"
        value: "bar"
    replication_bucket_destination_arn: "arn:aws:s3:::your-destination-bucket-name"
    replication_kms_key_id: "your-destination-kms-key-id"
    codebuild_access:
      - service-name
aurora:
  - identifier: testaurora
    in_use_by:
      - test-service
    engine: 'aurora-postgresql'
    engine_version: '11.9'
    db_name: 'testapp'
    port: 5432
    maintenance_window: 'mon:19:00-mon:19:30'
    backup_window: '09:00-10:00'
    backup_retention_period: 31
    force_ssl: true
    minimum_size:
      staging: 1
      production: 2
    maximum_size:
      staging: 1
      production: 2
    parameter_store_path_db_url_name: 'DATABASE_URL'
    replication_bucket_destination_arn: "arn:aws:s3:::your-destination-bucket-name"
    replication_kms_key_id: "your-destination-kms-key-id"
elasticache_cluster:
  - identifier: testredis
    in_use_by:
      - test-service
    node_type: 'cache.t2.micro'
    node_count: 1
    engine: 'redis'
    engine_version: '5.0.6'
    port: 6379
    maintenance_window: 'mon:19:00-mon:22:00'
    snapshot_window: '09:00-10:00'
    parameter_store_path_elasticache_cluster_url_name: 'REDIS_URL'
opensearch_cluster:
  - identifier: testos
    in_use_by:
      - test-service
    version: '1.2'
    master_enabled: true
    master_count: '1'
    master_type: 'c6g.large.search'
    instance_count: '3'
    instance_type: 't3.small.search'
    warm_enabled: true
    warm_count: '2'
    warm_type: 'ultrawarm1.medium.search'
    volume_size: '20'
    parameter_store_path_opensearch_cluster_url_name: 'ELASTICSEARCH_URL'
services:
  - name: test-service
    blue_green:
      production:
        enabled: true
        db_copy:
          from_db_host_ps_key: /test-app/other-test-service/production/DB_HOST
          from_db_name_ps_key: /test-app/other-test-service/production/DB_NAME
          from_db_user_ps_key: /test-app/other-test-service/production/DB_USER
          from_db_pass_ps_key: /test-app/other-test-service/production/DB_PASSWORD
          blue_green_db_name_ps_key: /test-app/test-service/production/DB_HOST
          blue_green_db_user_ps_key: /test-app/test-service/production/DB_USER
          blue_green_db_pass_ps_key: /test-app/test-service/production/DB_PASSWORD
        sql_backups_s3_bucket: new-dedicated-cluster-testservice-production-sql-backup
        db_rewrites:
          - from: other-test-service.example.com
            to: test-service.example.com
        directory_copy:
          - from: /mnt/efs/other-test-service-media
            to: /mnt/efs/test-service-media
            chown: "33:33"
        asset_copy_trigger_ps_key: /test-app/test-service/production/BLUE_GREEN_ASSET_COPY_TRIGGER
      staging:
        enabled: false
    launch_on:
      - production
      - staging
    launch_on_cluster: "test"
    monitoring:
      production:
        opsgenie_alerts:
Some files were not shown because too many files have changed in this diff.