Add files via upload

jlevine18 2019-01-06 13:02:35 -06:00 committed by GitHub
parent 752b981e37
commit d7301e26c3
78 changed files with 63595 additions and 0 deletions

@@ -0,0 +1,19 @@
# The names of individuals who have contributed to this project.
#
# Names are formatted as:
# name <email>
#
Ali Ijaz Sheikh <ofrobots@google.com>
Austin Peterson <austin@akpwebdesign.com>
Dave Gramlich <callmehiphop@gmail.com>
Eric Uldall <ericuldall@gmail.com>
Ernest Landrito <landrito@google.com>
Jason Dobry <jdobry@google.com>
Justin King <jcking@mtu.edu>
Karolis Narkevicius <karolis.n@gmail.com>
Kelvin Jin <kelvinjin@google.com>
Luke Sneeringer <lukesneeringer@google.com>
Matthew Loring <matthewloring@users.noreply.github.com>
Michael Prentice <splaktar@gmail.com>
Stephen Sawchuk <sawchuk@gmail.com>
Tim Swast <swast@google.com>

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

@@ -0,0 +1,79 @@
<img src="https://avatars2.githubusercontent.com/u/2810941?v=3&s=96" alt="Google Cloud Platform logo" title="Google Cloud Platform" align="right" height="96" width="96"/>
# [Google Cloud Common Module: Node.js Client](https://github.com/googlecloudplatform/google-cloud-node)
[![release level](https://img.shields.io/badge/release%20level-alpha-orange.svg?style=flat)](https://cloud.google.com/terms/launch-stages)
[![CircleCI](https://img.shields.io/circleci/project/github/googleapis/nodejs-common.svg?style=flat)](https://circleci.com/gh/googleapis/nodejs-common)
[![AppVeyor](https://ci.appveyor.com/api/projects/status/github/googleapis/nodejs-common?branch=master&svg=true)](https://ci.appveyor.com/project/googleapis/nodejs-common)
[![codecov](https://img.shields.io/codecov/c/github/googleapis/nodejs-common/master.svg?style=flat)](https://codecov.io/gh/googleapis/nodejs-common)
> Node.js Common package
The Google Cloud Common Node.js module contains the shared components used by the other Google Cloud client libraries.
* [github.com/googlecloudplatform/google-cloud-node](https://github.com/googlecloudplatform/google-cloud-node)
Read more about the client libraries for Cloud APIs, including the older
Google APIs Client Libraries, in [Client Libraries Explained][explained].
[explained]: https://cloud.google.com/apis/docs/client-libraries-explained
**Table of contents:**
* [Quickstart](#quickstart)
* [Before you begin](#before-you-begin)
* [Installing the package](#installing-the-package)
* [Versioning](#versioning)
* [Contributing](#contributing)
* [License](#license)
## Quickstart
### Before you begin
1. Select or create a Cloud Platform project.
[Go to the projects page][projects]
1. Enable billing for your project.
[Enable billing][billing]
1. [Set up authentication with a service account][auth] so you can access the
API from your local workstation.
[projects]: https://console.cloud.google.com/project
[billing]: https://support.google.com/cloud/answer/6293499#enable-billing
[auth]: https://cloud.google.com/docs/authentication/getting-started
### Installing the package
npm install --save @google-cloud/common
It's unlikely you will need to install this package directly, as it will be
installed as a dependency when you install other `@google-cloud` packages.
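As a quick illustration (not part of the published README), requiring the package gives you the building blocks that `src/index.js` exports later in this commit:
```js
// Minimal sketch based on the exports listed in src/index.js.
const common = require('@google-cloud/common');

// Shared base classes and helpers used by the individual client libraries.
const Service = common.Service;
const ServiceObject = common.ServiceObject;
const logger = common.logger;

// For example, a console logger (see src/logger.js) tagged for a module.
const log = logger({level: 'info', tag: 'quickstart'});
log.info('loaded @google-cloud/common');
```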
## Versioning
This library follows [Semantic Versioning](http://semver.org/).
This library is considered to be in **alpha**. This means it is still a
work-in-progress and under active development. Any release is subject to
backwards-incompatible changes at any time.
More Information: [Google Cloud Platform Launch Stages][launch_stages]
[launch_stages]: https://cloud.google.com/terms/launch-stages
## Contributing
Contributions welcome! See the [Contributing Guide](https://github.com/googlecloudplatform/google-cloud-node/blob/master/.github/CONTRIBUTING.md).
## License
Apache Version 2.0
See [LICENSE](https://github.com/googlecloudplatform/google-cloud-node/blob/master/LICENSE)

@@ -0,0 +1,22 @@
declare module 'retry-request' {
import * as request from 'request';
namespace retryRequest {
function getNextRetryDelay(retryNumber: number): void;
interface Options {
objectMode?: boolean,
request?: typeof request,
retries?: number,
noResponseRetries?: number,
currentRetryAttempt?: number,
shouldRetryFn?: (response: request.RequestResponse) => boolean
}
}
function retryRequest(requestOpts: request.Options, opts: retryRequest.Options, callback?: request.RequestCallback)
: { abort: () => void };
function retryRequest(requestOpts: request.Options, callback?: request.RequestCallback)
: { abort: () => void };
export = retryRequest;
}

@@ -0,0 +1,209 @@
'use strict';
var request = require('request');
var through = require('through2');
var DEFAULTS = {
objectMode: false,
request: request,
retries: 2,
noResponseRetries: 2,
currentRetryAttempt: 0,
shouldRetryFn: function (response) {
var retryRanges = [
// https://en.wikipedia.org/wiki/List_of_HTTP_status_codes
// 1xx - Retry (Informational, request still processing)
// 2xx - Do not retry (Success)
// 3xx - Do not retry (Redirect)
// 4xx - Do not retry (Client errors)
// 429 - Retry ("Too Many Requests")
// 5xx - Retry (Server errors)
[100, 199],
[429, 429],
[500, 599]
];
var statusCode = response.statusCode;
var range;
while ((range = retryRanges.shift())) {
if (statusCode >= range[0] && statusCode <= range[1]) {
// Not a successful status or redirect.
return true;
}
}
}
};
function retryRequest(requestOpts, opts, callback) {
var streamMode = typeof arguments[arguments.length - 1] !== 'function';
if (typeof opts === 'function') {
callback = opts;
}
opts = opts || DEFAULTS;
if (typeof opts.objectMode === 'undefined') {
opts.objectMode = DEFAULTS.objectMode;
}
if (typeof opts.request === 'undefined') {
opts.request = DEFAULTS.request;
}
if (typeof opts.retries !== 'number') {
opts.retries = DEFAULTS.retries;
}
if (typeof opts.currentRetryAttempt !== 'number') {
opts.currentRetryAttempt = DEFAULTS.currentRetryAttempt;
}
if (typeof opts.noResponseRetries !== 'number') {
opts.noResponseRetries = DEFAULTS.noResponseRetries;
}
if (typeof opts.shouldRetryFn !== 'function') {
opts.shouldRetryFn = DEFAULTS.shouldRetryFn;
}
var currentRetryAttempt = opts.currentRetryAttempt;
var numNoResponseAttempts = 0;
var streamResponseHandled = false;
var retryStream;
var requestStream;
var delayStream;
var activeRequest;
var retryRequest = {
abort: function () {
if (activeRequest && activeRequest.abort) {
activeRequest.abort();
}
}
};
if (streamMode) {
retryStream = through({ objectMode: opts.objectMode });
retryStream.abort = resetStreams;
}
if (currentRetryAttempt > 0) {
retryAfterDelay(currentRetryAttempt);
} else {
makeRequest();
}
if (streamMode) {
return retryStream;
} else {
return retryRequest;
}
function resetStreams() {
delayStream = null;
if (requestStream) {
requestStream.abort && requestStream.abort();
requestStream.cancel && requestStream.cancel();
if (requestStream.destroy) {
requestStream.destroy();
} else if (requestStream.end) {
requestStream.end();
}
}
}
function makeRequest() {
currentRetryAttempt++;
if (streamMode) {
streamResponseHandled = false;
delayStream = through({ objectMode: opts.objectMode });
requestStream = opts.request(requestOpts);
setImmediate(function () {
retryStream.emit('request');
});
requestStream
// gRPC via google-cloud-node can emit an `error` as well as a `response`.
// Whichever arrives first is the one we act on; we can't act on both, which
// is why we track it with `streamResponseHandled`.
.on('error', function (err) {
if (streamResponseHandled) {
return;
}
streamResponseHandled = true;
onResponse(err);
})
.on('response', function (resp, body) {
if (streamResponseHandled) {
return;
}
streamResponseHandled = true;
onResponse(null, resp, body);
})
.on('complete', retryStream.emit.bind(retryStream, 'complete'));
requestStream.pipe(delayStream);
} else {
activeRequest = opts.request(requestOpts, onResponse);
}
}
function retryAfterDelay(currentRetryAttempt) {
if (streamMode) {
resetStreams();
}
setTimeout(makeRequest, getNextRetryDelay(currentRetryAttempt));
}
function onResponse(err, response, body) {
// An error such as DNS resolution.
if (err) {
numNoResponseAttempts++;
if (numNoResponseAttempts <= opts.noResponseRetries) {
retryAfterDelay(numNoResponseAttempts);
} else {
if (streamMode) {
retryStream.emit('error', err);
retryStream.end();
} else {
callback(err, response, body);
}
}
return;
}
// Send the response to see if we should try again.
if (currentRetryAttempt <= opts.retries && opts.shouldRetryFn(response)) {
retryAfterDelay(currentRetryAttempt);
return;
}
// No more attempts need to be made, just continue on.
if (streamMode) {
retryStream.emit('response', response);
delayStream.pipe(retryStream);
requestStream.on('error', function (err) {
retryStream.destroy(err);
});
} else {
callback(err, response, body);
}
}
}
module.exports = retryRequest;
function getNextRetryDelay(retryNumber) {
return (Math.pow(2, retryNumber) * 1000) + Math.floor(Math.random() * 1000);
}
module.exports.getNextRetryDelay = getNextRetryDelay;
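The exported `getNextRetryDelay` implements exponential backoff with jitter: roughly 2^retryNumber seconds plus a random fraction of a second. A small sketch, for illustration only, of the delay windows it produces:
```js
// Illustration only: print the backoff window for the first few attempts.
var retryRequest = require('retry-request');

for (var attempt = 1; attempt <= 3; attempt++) {
  // attempt 1: 2000-2999 ms, attempt 2: 4000-4999 ms, attempt 3: 8000-8999 ms
  console.log('attempt ' + attempt + ': ' + retryRequest.getNextRetryDelay(attempt) + ' ms');
}
```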

@@ -0,0 +1,20 @@
The MIT License (MIT)
Copyright (c) 2015 Stephen Sawchuk
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

@@ -0,0 +1,70 @@
{
"_from": "retry-request@^3.0.0",
"_id": "retry-request@3.3.2",
"_inBundle": false,
"_integrity": "sha512-WIiGp37XXDC6e7ku3LFoi7LCL/Gs9luGeeqvbPRb+Zl6OQMw4RCRfSaW+aLfE6lhz1R941UavE6Svl3Dm5xGIQ==",
"_location": "/@google-cloud/common/retry-request",
"_phantomChildren": {},
"_requested": {
"type": "range",
"registry": true,
"raw": "retry-request@^3.0.0",
"name": "retry-request",
"escapedName": "retry-request",
"rawSpec": "^3.0.0",
"saveSpec": null,
"fetchSpec": "^3.0.0"
},
"_requiredBy": [
"/@google-cloud/common"
],
"_resolved": "https://registry.npmjs.org/retry-request/-/retry-request-3.3.2.tgz",
"_shasum": "fd8e0079e7b0dfc7056e500b6f089437db0da4df",
"_spec": "retry-request@^3.0.0",
"_where": "C:\\Users\\jlevi\\Downloads\\tr2022-strategy-master\\tr2022-strategy-master\\data analysis\\functions\\node_modules\\@google-cloud\\common",
"author": {
"name": "Stephen Sawchuk",
"email": "sawchuk@gmail.com"
},
"bugs": {
"url": "https://github.com/stephenplusplus/retry-request/issues"
},
"bundleDependencies": false,
"dependencies": {
"request": "^2.81.0",
"through2": "^2.0.0"
},
"deprecated": false,
"description": "Retry a request.",
"devDependencies": {
"async": "^2.5.0",
"lodash.range": "^3.2.0",
"mocha": "^2.2.5"
},
"engines": {
"node": ">=4"
},
"files": [
"index.js",
"index.d.ts",
"license"
],
"homepage": "https://github.com/stephenplusplus/retry-request#readme",
"keywords": [
"request",
"retry",
"stream"
],
"license": "MIT",
"main": "index.js",
"name": "retry-request",
"repository": {
"type": "git",
"url": "git+https://github.com/stephenplusplus/retry-request.git"
},
"scripts": {
"test": "mocha --timeout 0"
},
"types": "index.d.ts",
"version": "3.3.2"
}

@@ -0,0 +1,152 @@
|![retry-request](logo.png)
|:-:
|Retry a [request][request] with built-in [exponential backoff](https://developers.google.com/analytics/devguides/reporting/core/v3/coreErrors#backoff).
```sh
$ npm install --save retry-request
```
```js
var request = require('retry-request');
```
It should work the same as `request` in both callback mode and stream mode.
Note: This module only works when used as a readable stream, i.e. POST requests aren't supported ([#3](https://github.com/stephenplusplus/retry-request/issues/3)).
#### Callback
`urlThatReturns503` will be requested 3 total times before giving up and executing the callback.
```js
request(urlThatReturns503, function (err, resp, body) {});
```
#### Stream
`urlThatReturns503` will be requested 3 total times before giving up and emitting the `response` and `complete` events as usual.
```js
request(urlThatReturns503)
.on('error', function () {})
.on('response', function () {})
.on('complete', function () {});
```
## request(requestOptions, [opts], [cb])
### requestOptions
Passed directly to `request`. See the list of options supported: https://github.com/request/request/#requestoptions-callback.
### opts *(optional)*
#### `opts.noResponseRetries`
Type: `Number`
Default: `2`
The number of times to retry after a response fails to come through, such as a DNS resolution error or a socket hangup.
```js
var opts = {
noResponseRetries: 0
};
request(url, opts, function (err, resp, body) {
// url was requested 1 time before giving up and
// executing this callback.
});
```
#### `opts.objectMode`
Type: `Boolean`
Default: `false`
Set to `true` if your custom `opts.request` function returns a stream in object mode.
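For example, a sketch pairing `objectMode` with a custom `opts.request`; the `requestFn` below is hypothetical and stands in for any function that returns an object-mode stream:
```js
var request = require('retry-request');
var through = require('through2');

// Hypothetical request function that returns an object-mode stream. It would
// also need to emit `response` (or `error`) the way `request` does.
function requestFn(requestOpts) {
  var stream = through.obj();
  // ...issue the real call here and push parsed objects into `stream`...
  return stream;
}

var opts = {
  request: requestFn,
  objectMode: true // required because `requestFn` streams objects, not buffers
};

request({uri: 'http://example.com'}, opts)
  .on('data', function (obj) {
    // each `obj` is an object rather than a Buffer chunk
  });
```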
#### `opts.retries`
Type: `Number`
Default: `2`
```js
var opts = {
retries: 4
};
request(urlThatReturns503, opts, function (err, resp, body) {
// urlThatReturns503 was requested a total of 5 times
// before giving up and executing this callback.
});
```
#### `opts.currentRetryAttempt`
Type: `Number`
Default: `0`
```js
var opts = {
currentRetryAttempt: 1
};
request(urlThatReturns503, opts, function (err, resp, body) {
// urlThatReturns503 was requested as if it already failed once.
});
```
#### `opts.shouldRetryFn`
Type: `Function`
Default: Returns `true` if the [http.IncomingMessage](https://nodejs.org/api/http.html#http_http_incomingmessage) status code is 1xx, 429, or 5xx, matching the retry ranges used by the default `shouldRetryFn` in `index.js`.
```js
var opts = {
shouldRetryFn: function (incomingHttpMessage) {
return incomingHttpMessage.statusMessage !== 'OK';
}
};
request(urlThatReturnsNonOKStatusMessage, opts, function (err, resp, body) {
// urlThatReturnsNonOKStatusMessage was requested a
// total of 3 times, each time using `opts.shouldRetryFn`
// to decide if it should continue before giving up and
// executing this callback.
});
```
#### `opts.request`
Type: `Function`
Default: [`request`][request]
*NOTE: If you override the request function, and it returns a stream in object mode, be sure to set `opts.objectMode` to `true`.*
```js
var originalRequest = require('request').defaults({
pool: {
maxSockets: Infinity
}
});
var opts = {
request: originalRequest
};
request(urlThatReturns503, opts, function (err, resp, body) {
// Your provided `originalRequest` instance was used.
});
```
### cb *(optional)*
Passed directly to `request`. See the callback section: https://github.com/request/request/#requestoptions-callback.
[request]: https://github.com/request/request

@@ -0,0 +1,166 @@
{
"_from": "@google-cloud/common@^0.17.0",
"_id": "@google-cloud/common@0.17.0",
"_inBundle": false,
"_integrity": "sha512-HRZLSU762E6HaKoGfJGa8W95yRjb9rY7LePhjaHK9ILAnFacMuUGVamDbTHu1csZomm1g3tZTtXfX/aAhtie/Q==",
"_location": "/@google-cloud/common",
"_phantomChildren": {
"request": "2.88.0",
"through2": "2.0.5"
},
"_requested": {
"type": "range",
"registry": true,
"raw": "@google-cloud/common@^0.17.0",
"name": "@google-cloud/common",
"escapedName": "@google-cloud%2fcommon",
"scope": "@google-cloud",
"rawSpec": "^0.17.0",
"saveSpec": null,
"fetchSpec": "^0.17.0"
},
"_requiredBy": [
"/@google-cloud/storage"
],
"_resolved": "https://registry.npmjs.org/@google-cloud/common/-/common-0.17.0.tgz",
"_shasum": "8ef558750db481fc10a13757a49479ab9a1c8c07",
"_spec": "@google-cloud/common@^0.17.0",
"_where": "C:\\Users\\jlevi\\Downloads\\tr2022-strategy-master\\tr2022-strategy-master\\data analysis\\functions\\node_modules\\@google-cloud\\storage",
"author": {
"name": "Google Inc."
},
"bugs": {
"url": "https://github.com/googleapis/nodejs-common/issues"
},
"bundleDependencies": false,
"contributors": [
{
"name": "Ali Ijaz Sheikh",
"email": "ofrobots@google.com"
},
{
"name": "Austin Peterson",
"email": "austin@akpwebdesign.com"
},
{
"name": "Dave Gramlich",
"email": "callmehiphop@gmail.com"
},
{
"name": "Eric Uldall",
"email": "ericuldall@gmail.com"
},
{
"name": "Ernest Landrito",
"email": "landrito@google.com"
},
{
"name": "Jason Dobry",
"email": "jdobry@google.com"
},
{
"name": "Justin King",
"email": "jcking@mtu.edu"
},
{
"name": "Karolis Narkevicius",
"email": "karolis.n@gmail.com"
},
{
"name": "Kelvin Jin",
"email": "kelvinjin@google.com"
},
{
"name": "Luke Sneeringer",
"email": "lukesneeringer@google.com"
},
{
"name": "Matthew Loring",
"email": "matthewloring@users.noreply.github.com"
},
{
"name": "Michael Prentice",
"email": "splaktar@gmail.com"
},
{
"name": "Stephen Sawchuk",
"email": "sawchuk@gmail.com"
},
{
"name": "Tim Swast",
"email": "swast@google.com"
}
],
"dependencies": {
"array-uniq": "^1.0.3",
"arrify": "^1.0.1",
"concat-stream": "^1.6.0",
"create-error-class": "^3.0.2",
"duplexify": "^3.5.0",
"ent": "^2.2.0",
"extend": "^3.0.1",
"google-auto-auth": "^0.10.0",
"is": "^3.2.0",
"log-driver": "1.2.7",
"methmeth": "^1.1.0",
"modelo": "^4.2.0",
"request": "^2.79.0",
"retry-request": "^3.0.0",
"split-array-stream": "^1.0.0",
"stream-events": "^1.0.1",
"string-format-obj": "^1.1.0",
"through2": "^2.0.3"
},
"deprecated": false,
"description": "Common components for Cloud APIs Node.js Client Libraries",
"devDependencies": {
"@google-cloud/nodejs-repo-tools": "^2.1.1",
"async": "^2.6.0",
"codecov": "^3.0.0",
"eslint": "^4.10.0",
"eslint-config-prettier": "^2.7.0",
"eslint-plugin-node": "^6.0.0",
"eslint-plugin-prettier": "^2.3.1",
"ink-docstrap": "^1.3.0",
"intelli-espower-loader": "^1.0.1",
"js-green-licenses": "^0.5.0",
"jsdoc": "^3.5.5",
"mocha": "^5.0.0",
"nyc": "^11.3.0",
"power-assert": "^1.4.4",
"prettier": "^1.11.1",
"proxyquire": "^2.0.0",
"uuid": "^3.0.1"
},
"engines": {
"node": ">=4.0.0"
},
"files": [
"src",
"AUTHORS",
"CONTRIBUTORS",
"LICENSE"
],
"homepage": "https://github.com/googleapis/nodejs-common#readme",
"license": "Apache-2.0",
"main": "./src/index.js",
"name": "@google-cloud/common",
"repository": {
"type": "git",
"url": "git+https://github.com/googleapis/nodejs-common.git"
},
"scripts": {
"cover": "nyc --reporter=lcov mocha --require intelli-espower-loader test/*.js && nyc report",
"docs": "repo-tools exec -- jsdoc -c .jsdoc.js",
"generate-scaffolding": "repo-tools generate all",
"license-check": "jsgl --local .",
"lint": "repo-tools lint --cmd eslint -- src/ samples/ system-test/ test/",
"posttest": "npm run license-check",
"prettier": "repo-tools exec -- prettier --write src/*.js src/*/*.js samples/*.js samples/*/*.js test/*.js test/*/*.js system-test/*.js system-test/*/*.js",
"publish-module": "node ../../scripts/publish.js common",
"samples-test": "cd samples/ && npm link ../ && npm test && cd ../",
"test": "repo-tools test run --cmd npm -- run cover",
"test-no-cover": "repo-tools test run --cmd mocha -- test/*.js --no-timeouts"
},
"version": "0.17.0"
}

@@ -0,0 +1,51 @@
/*!
* Copyright 2016 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* @type {module:common/logger}
* @private
*/
exports.logger = require('./logger.js');
/**
* @type {module:common/operation}
* @private
*/
exports.Operation = require('./operation.js');
/**
* @type {module:common/paginator}
* @private
*/
exports.paginator = require('./paginator.js');
/**
* @type {module:common/service}
* @private
*/
exports.Service = require('./service.js');
/**
* @type {module:common/serviceObject}
* @private
*/
exports.ServiceObject = require('./service-object.js');
/**
* @type {module:common/util}
* @private
*/
exports.util = require('./util.js');

@@ -0,0 +1,71 @@
/*!
* Copyright 2016 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
/*!
* @module common/logger
*/
const format = require('string-format-obj');
const is = require('is');
const logDriver = require('log-driver');
/**
* The default list of log levels.
* @type {string[]}
*/
const LEVELS = ['silent', 'error', 'warn', 'info', 'debug', 'silly'];
/**
* Create a logger to print output to the console.
*
* @param {string=|object=} options - Configuration object. If a string, it is
* treated as `options.level`.
* @param {string=} options.level - The minimum log level that will print to the
* console. (Default: `error`)
* @param {Array.<string>=} options.levels - The list of levels to use. (Default:
* logger.LEVELS)
* @param {string=} options.tag - A tag to use in log messages.
*/
function logger(options) {
if (is.string(options)) {
options = {
level: options,
};
}
options = options || {};
return logDriver({
levels: options.levels || LEVELS,
level: options.level || 'error',
format: function() {
const args = [].slice.call(arguments);
return format('{level}{tag} {message}', {
level: args.shift().toUpperCase(),
tag: options.tag ? ':' + options.tag + ':' : '',
message: args.join(' '),
});
},
});
}
module.exports = logger;
module.exports.LEVELS = LEVELS;
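As an illustrative sketch (not part of this file), the object returned by `log-driver` has one method per level, so typical usage looks roughly like this:
```js
// Sketch: construct a logger and emit messages at different levels.
const logger = require('@google-cloud/common').logger;

// Only `error` prints by default; raising the level shows more detail.
const log = logger({level: 'info', tag: 'my-module'});

log.info('polling started');  // prints "INFO:my-module: polling started"
log.error('request failed');  // prints "ERROR:my-module: request failed"
log.debug('suppressed at the info level');
```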

@@ -0,0 +1,193 @@
/*!
* Copyright 2016 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*!
* @module common/operation
*/
'use strict';
const events = require('events');
const extend = require('extend');
const modelo = require('modelo');
/**
* @type {module:common/serviceObject}
* @private
*/
const ServiceObject = require('./service-object.js');
// jscs:disable maximumLineLength
/**
* An Operation object allows you to interact with long-running API operations,
* polling their status until they complete.
*
* @constructor
* @alias module:common/operation
*
* @param {object} config - Configuration object.
* @param {module:common/service|module:common/serviceObject|module:common/grpcService|module:common/grpcServiceObject} config.parent - The
* parent object.
* @param {string} config.id - The operation ID.
*/
// jscs:enable maximumLineLength
function Operation(config) {
const methods = {
/**
* Checks to see if an operation exists.
*/
exists: true,
/**
* Retrieves the operation.
*/
get: true,
/**
* Retrieves metadata for the operation.
*/
getMetadata: {
reqOpts: {
name: config.id,
},
},
};
config = extend(
{
baseUrl: '',
},
config
);
config.methods = config.methods || methods;
ServiceObject.call(this, config);
events.EventEmitter.call(this);
this.completeListeners = 0;
this.hasActiveListeners = false;
this.listenForEvents_();
}
modelo.inherits(Operation, ServiceObject, events.EventEmitter);
/**
* Wraps the `complete` and `error` events in a Promise.
*
* @return {promise}
*/
Operation.prototype.promise = function() {
const self = this;
return new self.Promise(function(resolve, reject) {
self.on('error', reject).on('complete', function(metadata) {
resolve([metadata]);
});
});
};
/**
* Begin listening for events on the operation. This method keeps track of how
* many "complete" listeners are registered and removed, making sure polling is
* handled automatically.
*
* As long as there is one active "complete" listener, the connection is open.
* When there are no more listeners, the polling stops.
*
* @private
*/
Operation.prototype.listenForEvents_ = function() {
const self = this;
this.on('newListener', function(event) {
if (event === 'complete') {
self.completeListeners++;
if (!self.hasActiveListeners) {
self.hasActiveListeners = true;
self.startPolling_();
}
}
});
this.on('removeListener', function(event) {
if (event === 'complete' && --self.completeListeners === 0) {
self.hasActiveListeners = false;
}
});
};
/**
* Poll for a status update. Execute the callback:
*
* - callback(err): Operation failed
* - callback(): Operation incomplete
* - callback(null, metadata): Operation complete
*
* @private
*
* @param {function} callback
*/
Operation.prototype.poll_ = function(callback) {
this.getMetadata(function(err, resp) {
if (err || resp.error) {
callback(err || resp.error);
return;
}
if (!resp.done) {
callback();
return;
}
callback(null, resp);
});
};
/**
* Poll `getMetadata` to check the operation's status. This runs a loop to ping
* the API on an interval.
*
* Note: This method is automatically called once a "complete" event handler is
* registered on the operation.
*
* @private
*/
Operation.prototype.startPolling_ = function() {
const self = this;
if (!this.hasActiveListeners) {
return;
}
this.poll_(function(err, metadata) {
if (err) {
self.emit('error', err);
return;
}
if (!metadata) {
setTimeout(self.startPolling_.bind(self), 500);
return;
}
self.emit('complete', metadata);
});
};
module.exports = Operation;
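A hedged sketch of how a caller typically consumes an Operation instance: attaching a `complete` listener starts the polling loop, and `promise()` wraps the same pair of events.
```js
// Sketch only: `operation` is assumed to be an instance handed out by a
// service client that passed `parent` and `id` into the constructor above.
function waitForOperation(operation) {
  operation
    .on('error', function(err) {
      // Polling failed, or the operation itself reported an error.
      console.error('operation failed:', err);
    })
    .on('complete', function(metadata) {
      // Emitted once polling sees `done: true` in the metadata.
      console.log('operation finished:', metadata);
    });

  // Equivalent Promise form; resolves with `[metadata]`.
  return operation.promise();
}
```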

@@ -0,0 +1,267 @@
/*!
* Copyright 2015 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*!
* @module common/paginator
*/
'use strict';
const arrify = require('arrify');
const concat = require('concat-stream');
const extend = require('extend');
const is = require('is');
const split = require('split-array-stream');
/**
* @type {module:common/util}
* @private
*/
const util = require('./util.js');
/*! Developer Documentation
*
* paginator is used to auto-paginate `nextQuery` methods as well as
* streamifying them.
*
* Before:
*
* search.query('done=true', function(err, results, nextQuery) {
* search.query(nextQuery, function(err, results, nextQuery) {});
* });
*
* After:
*
* search.query('done=true', function(err, results) {});
*
* Methods to extend should be written to accept callbacks and return a
* `nextQuery`.
*/
const paginator = {};
/**
* Cache the original method, then overwrite it on the Class's prototype.
*
* @param {function} Class - The parent class of the methods to extend.
* @param {string|string[]} methodNames - Name(s) of the methods to extend.
*/
paginator.extend = function(Class, methodNames) {
methodNames = arrify(methodNames);
methodNames.forEach(function(methodName) {
const originalMethod = Class.prototype[methodName];
// map the original method to a private member
Class.prototype[methodName + '_'] = originalMethod;
// overwrite the original to auto-paginate
Class.prototype[methodName] = function() {
const parsedArguments = paginator.parseArguments_(arguments);
return paginator.run_(parsedArguments, originalMethod.bind(this));
};
});
};
/**
* Wraps paginated API calls in a readable object stream.
*
* This method simply calls the nextQuery recursively, emitting results to a
* stream. The stream ends when `nextQuery` is null.
*
* `maxResults` will act as a cap for how many results are fetched and emitted
* to the stream.
*
* @param {string} methodName - Name of the method to streamify.
* @return {function} - Wrapped function.
*/
paginator.streamify = function(methodName) {
return function() {
const parsedArguments = paginator.parseArguments_(arguments);
const originalMethod = this[methodName + '_'] || this[methodName];
return paginator.runAsStream_(parsedArguments, originalMethod.bind(this));
};
};
/**
* Parse a pseudo-array `arguments` for a query and callback.
*
* @param {array} args - The original `arguments` pseudo-array that the original
* method received.
*/
paginator.parseArguments_ = function(args) {
let query;
let autoPaginate = true;
let maxApiCalls = -1;
let maxResults = -1;
let callback;
const firstArgument = args[0];
const lastArgument = args[args.length - 1];
if (is.fn(firstArgument)) {
callback = firstArgument;
} else {
query = firstArgument;
}
if (is.fn(lastArgument)) {
callback = lastArgument;
}
if (is.object(query)) {
query = extend(true, {}, query);
// Check if the user only asked for a certain amount of results.
if (is.number(query.maxResults)) {
// `maxResults` is used API-wide.
maxResults = query.maxResults;
} else if (is.number(query.pageSize)) {
// `pageSize` is Pub/Sub's `maxResults`.
maxResults = query.pageSize;
}
if (is.number(query.maxApiCalls)) {
maxApiCalls = query.maxApiCalls;
delete query.maxApiCalls;
}
if (
callback &&
(maxResults !== -1 || // The user specified a limit.
query.autoPaginate === false)
) {
autoPaginate = false;
}
}
const parsedArguments = {
query: query || {},
autoPaginate: autoPaginate,
maxApiCalls: maxApiCalls,
maxResults: maxResults,
callback: callback,
};
parsedArguments.streamOptions = extend(true, {}, parsedArguments.query);
delete parsedArguments.streamOptions.autoPaginate;
delete parsedArguments.streamOptions.maxResults;
delete parsedArguments.streamOptions.pageSize;
return parsedArguments;
};
/**
* This simply checks to see if `autoPaginate` is set or not, if it's true
* then we buffer all results, otherwise simply call the original method.
*
* @param {array} parsedArguments - Parsed arguments from the original method
* call.
* @param {object=|string=} parsedArguments.query - Query object. This is most
* commonly an object, but to make the API more simple, it can also be a
* string in some places.
* @param {function=} parsedArguments.callback - Callback function.
* @param {boolean} parsedArguments.autoPaginate - Auto-pagination enabled.
* @param {boolean} parsedArguments.maxApiCalls - Maximum API calls to make.
* @param {number} parsedArguments.maxResults - Maximum results to return.
* @param {function} originalMethod - The cached method that accepts a callback
* and returns `nextQuery` to receive more results.
*/
paginator.run_ = function(parsedArguments, originalMethod) {
const query = parsedArguments.query;
const callback = parsedArguments.callback;
const autoPaginate = parsedArguments.autoPaginate;
if (autoPaginate) {
this.runAsStream_(parsedArguments, originalMethod)
.on('error', callback)
.pipe(
concat(function(results) {
callback(null, results);
})
);
} else {
originalMethod(query, callback);
}
};
/**
* This method simply calls the nextQuery recursively, emitting results to a
* stream. The stream ends when `nextQuery` is null.
*
* `maxResults` will act as a cap for how many results are fetched and emitted
* to the stream.
*
* @param {object=|string=} parsedArguments.query - Query object. This is most
* commonly an object, but to make the API more simple, it can also be a
* string in some places.
* @param {function=} parsedArguments.callback - Callback function.
* @param {boolean} parsedArguments.autoPaginate - Auto-pagination enabled.
* @param {boolean} parsedArguments.maxApiCalls - Maximum API calls to make.
* @param {number} parsedArguments.maxResults - Maximum results to return.
* @param {function} originalMethod - The cached method that accepts a callback
* and returns `nextQuery` to receive more results.
* @return {stream} - Readable object stream.
*/
paginator.runAsStream_ = function(parsedArguments, originalMethod) {
let query = parsedArguments.query;
let resultsToSend = parsedArguments.maxResults;
const limiter = util.createLimiter(makeRequest, {
maxApiCalls: parsedArguments.maxApiCalls,
streamOptions: parsedArguments.streamOptions,
});
const stream = limiter.stream;
stream.once('reading', function() {
makeRequest(query);
});
function makeRequest(query) {
originalMethod(query, onResultSet);
}
function onResultSet(err, results, nextQuery) {
if (err) {
stream.destroy(err);
return;
}
if (resultsToSend >= 0 && results.length > resultsToSend) {
results = results.splice(0, resultsToSend);
}
resultsToSend -= results.length;
split(results, stream, function(streamEnded) {
if (streamEnded) {
return;
}
if (nextQuery && resultsToSend !== 0) {
limiter.makeRequest(nextQuery);
return;
}
stream.push(null);
});
}
return limiter.stream;
};
module.exports = paginator;
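A hedged sketch (the `Zone`/`getRecords` names are hypothetical) of how a client library wires these helpers up:
```js
const paginator = require('@google-cloud/common').paginator;

// Hypothetical consumer: a method that accepts a callback and yields a
// `nextQuery`, which is the shape paginator expects.
function Zone() {}
Zone.prototype.getRecords = function(query, callback) {
  // ...call the API, then: callback(err, records, nextQuery, apiResponse);
  callback(null, [], null);
};

// Cache the original as `getRecords_` and auto-paginate the public method.
paginator.extend(Zone, 'getRecords');

// Expose a streaming variant built on the cached original.
Zone.prototype.getRecordsStream = paginator.streamify('getRecords');

const zone = new Zone();
zone.getRecords(function(err, records) {
  // With auto-pagination (the default), `records` is the full buffered set.
});
```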

@@ -0,0 +1,392 @@
/*!
* Copyright 2015 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*!
* @module common/service-object
*/
'use strict';
const arrify = require('arrify');
const exec = require('methmeth');
const extend = require('extend');
const is = require('is');
/**
* @type {module:common/util}
* @private
*/
const util = require('./util.js');
/**
* ServiceObject is a base class, meant to be inherited from by a "service
* object," like a BigQuery dataset or Storage bucket.
*
* Most of the time, these objects share common functionality; they can be
* created or deleted, and you can get or set their metadata.
*
* By inheriting from this class, a service object will be extended with these
* shared behaviors. Note that any method can be overridden when the service
* object requires specific behavior.
*
* @constructor
* @alias module:common/service-object
*
* @private
*
* @param {object} config - Configuration object.
* @param {string} config.baseUrl - The base URL to make API requests to.
* @param {string} config.createMethod - The method which creates this object.
* @param {string=} config.id - The identifier of the object. For example, the
* name of a Storage bucket or Pub/Sub topic.
* @param {object=} config.methods - A map of each method name that should be
* inherited.
* @param {object} config.methods[].reqOpts - Default request options for this
* particular method. A common use case is when `setMetadata` requires a
* `PUT` method to override the default `PATCH`.
* @param {object} config.parent - The parent service instance. For example, an
* instance of Storage if the object is Bucket.
*/
function ServiceObject(config) {
const self = this;
util.privatize(this, 'metadata', {});
util.privatize(this, 'baseUrl', config.baseUrl);
util.privatize(this, 'parent', config.parent); // Parent class.
util.privatize(this, 'id', config.id); // Name or ID (e.g. dataset ID, bucket name, etc.)
util.privatize(this, 'createMethod', config.createMethod);
util.privatize(this, 'methods', config.methods || {});
util.privatize(this, 'interceptors', []);
util.privatize(this, 'Promise', this.parent.Promise);
if (config.methods) {
const allMethodNames = Object.keys(ServiceObject.prototype);
allMethodNames
.filter(function(methodName) {
return (
// All ServiceObjects need `request`.
!/^request/.test(methodName) &&
// The ServiceObject didn't redefine the method.
self[methodName] === ServiceObject.prototype[methodName] &&
// This method isn't wanted.
!config.methods[methodName]
);
})
.forEach(function(methodName) {
self[methodName] = undefined;
});
}
}
/**
* Create the object.
*
* @param {object=} options - Configuration object.
* @param {function} callback - The callback function.
* @param {?error} callback.err - An error returned while making this request.
* @param {object} callback.instance - The instance.
* @param {object} callback.apiResponse - The full API response.
*/
ServiceObject.prototype.create = function(options, callback) {
const self = this;
const args = [this.id];
if (is.fn(options)) {
callback = options;
}
if (is.object(options)) {
args.push(options);
}
// Wrap the callback to return *this* instance of the object, not the newly-
// created one.
function onCreate(err, instance) {
const args = [].slice.call(arguments);
if (!err) {
self.metadata = instance.metadata;
args[1] = self; // replace the created `instance` with this one.
}
callback.apply(null, args);
}
args.push(onCreate);
this.createMethod.apply(null, args);
};
/**
* Delete the object.
*
* @param {function=} callback - The callback function.
* @param {?error} callback.err - An error returned while making this request.
* @param {object} callback.apiResponse - The full API response.
*/
ServiceObject.prototype.delete = function(callback) {
const methodConfig = this.methods.delete || {};
callback = callback || util.noop;
const reqOpts = extend(
{
method: 'DELETE',
uri: '',
},
methodConfig.reqOpts
);
// The `request` method may have been overridden to hold any special behavior.
// Ensure we call the original `request` method.
ServiceObject.prototype.request.call(this, reqOpts, function(err, resp) {
callback(err, resp);
});
};
/**
* Check if the object exists.
*
* @param {function} callback - The callback function.
* @param {?error} callback.err - An error returned while making this request.
* @param {boolean} callback.exists - Whether the object exists or not.
*/
ServiceObject.prototype.exists = function(callback) {
this.get(function(err) {
if (err) {
if (err.code === 404) {
callback(null, false);
} else {
callback(err);
}
return;
}
callback(null, true);
});
};
/**
* Get the object if it exists. Optionally have the object created if an options
* object is provided with `autoCreate: true`.
*
* @param {object=} config - The configuration object that will be used to
* create the object if necessary.
* @param {boolean} config.autoCreate - Create the object if it doesn't already
* exist.
* @param {function} callback - The callback function.
* @param {?error} callback.err - An error returned while making this request.
* @param {object} callback.instance - The instance.
* @param {object} callback.apiResponse - The full API response.
*/
ServiceObject.prototype.get = function(config, callback) {
const self = this;
if (is.fn(config)) {
callback = config;
config = {};
}
config = config || {};
const autoCreate = config.autoCreate && is.fn(this.create);
delete config.autoCreate;
function onCreate(err, instance, apiResponse) {
if (err) {
if (err.code === 409) {
self.get(config, callback);
return;
}
callback(err, null, apiResponse);
return;
}
callback(null, instance, apiResponse);
}
this.getMetadata(function(err, metadata) {
if (err) {
if (err.code === 404 && autoCreate) {
const args = [];
if (!is.empty(config)) {
args.push(config);
}
args.push(onCreate);
self.create.apply(self, args);
return;
}
callback(err, null, metadata);
return;
}
callback(null, self, metadata);
});
};
/**
* Get the metadata of this object.
*
* @param {function} callback - The callback function.
* @param {?error} callback.err - An error returned while making this request.
* @param {object} callback.metadata - The metadata for this object.
* @param {object} callback.apiResponse - The full API response.
*/
ServiceObject.prototype.getMetadata = function(callback) {
const self = this;
const methodConfig = this.methods.getMetadata || {};
const reqOpts = extend(
{
uri: '',
},
methodConfig.reqOpts
);
// The `request` method may have been overridden to hold any special behavior.
// Ensure we call the original `request` method.
ServiceObject.prototype.request.call(this, reqOpts, function(err, resp) {
if (err) {
callback(err, null, resp);
return;
}
self.metadata = resp;
callback(null, self.metadata, resp);
});
};
/**
* Set the metadata for this object.
*
* @param {object} metadata - The metadata to set on this object.
* @param {function=} callback - The callback function.
* @param {?error} callback.err - An error returned while making this request.
* @param {object} callback.instance - The instance.
* @param {object} callback.apiResponse - The full API response.
*/
ServiceObject.prototype.setMetadata = function(metadata, callback) {
const self = this;
callback = callback || util.noop;
const methodConfig = this.methods.setMetadata || {};
const reqOpts = extend(
true,
{
method: 'PATCH',
uri: '',
json: metadata,
},
methodConfig.reqOpts
);
// The `request` method may have been overridden to hold any special behavior.
// Ensure we call the original `request` method.
ServiceObject.prototype.request.call(this, reqOpts, function(err, resp) {
if (err) {
callback(err, resp);
return;
}
self.metadata = resp;
callback(null, resp);
});
};
/**
* Make an authenticated API request.
*
* @private
*
* @param {object} reqOpts - Request options that are passed to `request`.
* @param {string} reqOpts.uri - A URI relative to the baseUrl.
* @param {function} callback - The callback function passed to `request`.
*/
ServiceObject.prototype.request_ = function(reqOpts, callback) {
reqOpts = extend(true, {}, reqOpts);
const isAbsoluteUrl = reqOpts.uri.indexOf('http') === 0;
const uriComponents = [this.baseUrl, this.id || '', reqOpts.uri];
if (isAbsoluteUrl) {
uriComponents.splice(0, uriComponents.indexOf(reqOpts.uri));
}
reqOpts.uri = uriComponents
.filter(exec('trim')) // Limit to non-empty strings.
.map(function(uriComponent) {
const trimSlashesRegex = /^\/*|\/*$/g;
return uriComponent.replace(trimSlashesRegex, '');
})
.join('/');
const childInterceptors = arrify(reqOpts.interceptors_);
const localInterceptors = [].slice.call(this.interceptors);
reqOpts.interceptors_ = childInterceptors.concat(localInterceptors);
if (!callback) {
return this.parent.requestStream(reqOpts);
}
this.parent.request(reqOpts, callback);
};
/**
* Make an authenticated API request.
*
* @private
*
* @param {object} reqOpts - Request options that are passed to `request`.
* @param {string} reqOpts.uri - A URI relative to the baseUrl.
* @param {function} callback - The callback function passed to `request`.
*/
ServiceObject.prototype.request = function(reqOpts, callback) {
ServiceObject.prototype.request_.call(this, reqOpts, callback);
};
/**
* Make an authenticated API request.
*
* @private
*
* @param {object} reqOpts - Request options that are passed to `request`.
* @param {string} reqOpts.uri - A URI relative to the baseUrl.
*/
ServiceObject.prototype.requestStream = function(reqOpts) {
return ServiceObject.prototype.request_.call(this, reqOpts);
};
/*! Developer Documentation
*
* All async methods (except for streams) will return a Promise in the event
* that a callback is omitted.
*/
util.promisifyAll(ServiceObject);
module.exports = ServiceObject;
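A hedged sketch (the `Topic`/`pubsub` names are hypothetical) of a subclass opting into the shared behaviors through `config.methods`:
```js
const ServiceObject = require('@google-cloud/common').ServiceObject;
const nodeutil = require('util');

// Hypothetical service object that only wants `exists`, `get`, and
// `getMetadata`; every other shared method is removed by the constructor.
function Topic(pubsub, name) {
  ServiceObject.call(this, {
    parent: pubsub, // must expose request/requestStream and a Promise constructor
    baseUrl: '/topics',
    id: name,
    createMethod: pubsub.createTopic.bind(pubsub),
    methods: {
      exists: true,
      get: true,
      getMetadata: true,
    },
  });
}
nodeutil.inherits(Topic, ServiceObject);
```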

@@ -0,0 +1,200 @@
/*!
* Copyright 2015 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*!
* @module common/service
*/
'use strict';
const arrify = require('arrify');
const extend = require('extend');
/**
* @type {module:common/util}
* @private
*/
const util = require('./util.js');
const PROJECT_ID_TOKEN = '{{projectId}}';
/**
* Service is a base class, meant to be inherited from by a "service," like
* BigQuery or Storage.
*
* This handles making authenticated requests by exposing `request` and `requestStream` methods.
*
* @constructor
* @alias module:common/service
*
* @param {object} config - Configuration object.
* @param {string} config.baseUrl - The base URL to make API requests to.
* @param {string[]} config.scopes - The scopes required for the request.
* @param {object=} options - [Configuration object](#/docs).
*/
function Service(config, options) {
options = options || {};
util.privatize(this, 'baseUrl', config.baseUrl);
util.privatize(this, 'globalInterceptors', arrify(options.interceptors_));
util.privatize(this, 'interceptors', []);
util.privatize(this, 'packageJson', config.packageJson);
util.privatize(this, 'projectId', options.projectId || PROJECT_ID_TOKEN);
util.privatize(this, 'projectIdRequired', config.projectIdRequired !== false);
util.privatize(this, 'Promise', options.promise || Promise);
const reqCfg = extend({}, config, {
projectIdRequired: this.projectIdRequired,
projectId: this.projectId,
credentials: options.credentials,
keyFile: options.keyFilename,
email: options.email,
token: options.token,
});
util.privatize(
this,
'makeAuthenticatedRequest',
util.makeAuthenticatedRequestFactory(reqCfg)
);
util.privatize(this, 'authClient', this.makeAuthenticatedRequest.authClient);
util.privatize(
this,
'getCredentials',
this.makeAuthenticatedRequest.getCredentials
);
const isCloudFunctionEnv = !!process.env.FUNCTION_NAME;
if (isCloudFunctionEnv) {
this.interceptors.push({
request: function(reqOpts) {
reqOpts.forever = false;
return reqOpts;
},
});
}
}
/**
* Get and update the Service's project ID.
*
* @param {function} callback - The callback function.
*/
Service.prototype.getProjectId = function(callback) {
const self = this;
this.authClient.getProjectId(function(err, projectId) {
if (err) {
callback(err);
return;
}
if (self.projectId === PROJECT_ID_TOKEN && projectId) {
self.projectId = projectId;
}
callback(null, self.projectId);
});
};
/**
* Make an authenticated API request.
*
* @private
*
* @param {object} reqOpts - Request options that are passed to `request`.
* @param {string} reqOpts.uri - A URI relative to the baseUrl.
* @param {function} callback - The callback function passed to `request`.
*/
Service.prototype.request_ = function(reqOpts, callback) {
reqOpts = extend(true, {}, reqOpts);
const isAbsoluteUrl = reqOpts.uri.indexOf('http') === 0;
const uriComponents = [this.baseUrl];
if (this.projectIdRequired) {
uriComponents.push('projects');
uriComponents.push(this.projectId);
}
uriComponents.push(reqOpts.uri);
if (isAbsoluteUrl) {
uriComponents.splice(0, uriComponents.indexOf(reqOpts.uri));
}
reqOpts.uri = uriComponents
.map(function(uriComponent) {
const trimSlashesRegex = /^\/*|\/*$/g;
return uriComponent.replace(trimSlashesRegex, '');
})
.join('/')
// Some URIs have colon separators.
// Bad: https://.../projects/:list
// Good: https://.../projects:list
.replace(/\/:/g, ':');
// Interceptors should be called in the order they were assigned.
const combinedInterceptors = [].slice
.call(this.globalInterceptors)
.concat(this.interceptors)
.concat(arrify(reqOpts.interceptors_));
let interceptor;
while ((interceptor = combinedInterceptors.shift()) && interceptor.request) {
reqOpts = interceptor.request(reqOpts);
}
delete reqOpts.interceptors_;
const pkg = this.packageJson;
reqOpts.headers = extend({}, reqOpts.headers, {
'User-Agent': util.getUserAgentFromPackageJson(pkg),
'x-goog-api-client': `gl-node/${process.versions.node} gccl/${pkg.version}`,
});
return this.makeAuthenticatedRequest(reqOpts, callback);
};
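// Illustrative trace of the URI composition above (hypothetical values):
//
//   baseUrl:            'https://fakeservice.googleapis.com/v1/'
//   projectIdRequired:  true
//   projectId:          'grape-spaceship-123'
//   reqOpts.uri:        '/datasets/:list'
//
//   => 'https://fakeservice.googleapis.com/v1/projects/grape-spaceship-123/datasets:list'
//
// If `reqOpts.uri` is an absolute URL (starts with "http"), the components
// before it are dropped and it is used as-is.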
/**
* Make an authenticated API request.
*
* @private
*
* @param {object} reqOpts - Request options that are passed to `request`.
* @param {string} reqOpts.uri - A URI relative to the baseUrl.
* @param {function} callback - The callback function passed to `request`.
*/
Service.prototype.request = function(reqOpts, callback) {
Service.prototype.request_.call(this, reqOpts, callback);
};
/**
* Make an authenticated API request.
*
* @private
*
* @param {object} reqOpts - Request options that are passed to `request`.
* @param {string} reqOpts.uri - A URI relative to the baseUrl.
*/
Service.prototype.requestStream = function(reqOpts) {
return Service.prototype.request_.call(this, reqOpts);
};
module.exports = Service;

View File

@ -0,0 +1,830 @@
/**
* Copyright 2014 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*!
* @module common/util
*/
'use strict';
const createErrorClass = require('create-error-class');
const duplexify = require('duplexify');
const ent = require('ent');
const extend = require('extend');
const googleAuth = require('google-auto-auth');
const is = require('is');
const request = require('request').defaults({
timeout: 60000,
gzip: true,
forever: true,
pool: {
maxSockets: Infinity,
},
});
const retryRequest = require('retry-request');
const streamEvents = require('stream-events');
const through = require('through2');
const uniq = require('array-uniq');
const util = module.exports;
/**
* Custom error type for missing project ID errors.
*/
util.MissingProjectIdError = createErrorClass(
'MissingProjectIdError',
function() {
this.message = `Sorry, we cannot connect to Cloud Services without a project
ID. You may specify one with an environment variable named
"GOOGLE_CLOUD_PROJECT".`.replace(/ +/g, ' ');
}
);
/**
* No op.
*
* @example
* function doSomething(callback) {
* callback = callback || noop;
* }
*/
function noop() {}
util.noop = noop;
/**
* Custom error type for API errors.
*
* @param {object} errorBody - Error object.
*/
util.ApiError = createErrorClass('ApiError', function(errorBody) {
this.code = errorBody.code;
this.errors = errorBody.errors;
this.response = errorBody.response;
try {
this.errors = JSON.parse(this.response.body).error.errors;
} catch (e) {
this.errors = errorBody.errors;
}
const messages = [];
if (errorBody.message) {
messages.push(errorBody.message);
}
if (this.errors && this.errors.length === 1) {
messages.push(this.errors[0].message);
} else if (this.response && this.response.body) {
messages.push(ent.decode(errorBody.response.body.toString()));
} else if (!errorBody.message) {
messages.push('Error during request.');
}
this.message = uniq(messages).join(' - ');
});
/**
* Custom error type for partial errors returned from the API.
*
* @param {object} b - Error object.
*/
util.PartialFailureError = createErrorClass('PartialFailureError', function(b) {
const errorObject = b;
this.errors = errorObject.errors;
this.response = errorObject.response;
const defaultErrorMessage = 'A failure occurred during this request.';
this.message = errorObject.message || defaultErrorMessage;
});
/**
* Uniformly process an API response.
*
* @param {*} err - Error value.
* @param {*} resp - Response value.
* @param {*} body - Body value.
* @param {function} callback - The callback function.
*/
function handleResp(err, resp, body, callback) {
callback = callback || util.noop;
const parsedResp = extend(
true,
{err: err || null},
resp && util.parseHttpRespMessage(resp),
body && util.parseHttpRespBody(body)
);
callback(parsedResp.err, parsedResp.body, parsedResp.resp);
}
util.handleResp = handleResp;
/**
* Sniff an incoming HTTP response message for errors.
*
* @param {object} httpRespMessage - An incoming HTTP response message from
* `request`.
* @return {object} parsedHttpRespMessage - The parsed response.
* @param {?error} parsedHttpRespMessage.err - An error detected.
* @param {object} parsedHttpRespMessage.resp - The original response object.
*/
function parseHttpRespMessage(httpRespMessage) {
const parsedHttpRespMessage = {
resp: httpRespMessage,
};
if (httpRespMessage.statusCode < 200 || httpRespMessage.statusCode > 299) {
// Unknown error. Format according to ApiError standard.
parsedHttpRespMessage.err = new util.ApiError({
errors: [],
code: httpRespMessage.statusCode,
message: httpRespMessage.statusMessage,
response: httpRespMessage,
});
}
return parsedHttpRespMessage;
}
util.parseHttpRespMessage = parseHttpRespMessage;
/**
* Parse the response body from an HTTP request.
*
* @param {object} body - The response body.
 * @return {object} parsedHttpRespBody - The parsed response.
 * @param {?error} parsedHttpRespBody.err - An error detected.
 * @param {object} parsedHttpRespBody.body - The original body value. An attempt
 * is made to JSON.parse it; if parsing succeeds, the parsed value is returned
 * here, otherwise the original value.
*/
function parseHttpRespBody(body) {
const parsedHttpRespBody = {
body: body,
};
if (is.string(body)) {
try {
parsedHttpRespBody.body = JSON.parse(body);
} catch (err) {
parsedHttpRespBody.err = new util.ApiError('Cannot parse JSON response');
}
}
if (parsedHttpRespBody.body && parsedHttpRespBody.body.error) {
// Error from JSON API.
parsedHttpRespBody.err = new util.ApiError(parsedHttpRespBody.body.error);
}
return parsedHttpRespBody;
}
util.parseHttpRespBody = parseHttpRespBody;
/**
* Take a Duplexify stream, fetch an authenticated connection header, and create
* an outgoing writable stream.
*
* @param {Duplexify} dup - Duplexify stream.
* @param {object} options - Configuration object.
* @param {module:common/connection} options.connection - A connection instance,
* used to get a token with and send the request through.
* @param {object} options.metadata - Metadata to send at the head of the
* request.
* @param {object} options.request - Request object, in the format of a standard
* Node.js http.request() object.
* @param {string=} options.request.method - Default: "POST".
* @param {string=} options.request.qs.uploadType - Default: "multipart".
* @param {string=} options.streamContentType - Default:
* "application/octet-stream".
* @param {function} onComplete - Callback, executed after the writable Request
* stream has completed.
*/
function makeWritableStream(dup, options, onComplete) {
onComplete = onComplete || util.noop;
const writeStream = through();
dup.setWritable(writeStream);
const defaultReqOpts = {
method: 'POST',
qs: {
uploadType: 'multipart',
},
};
const metadata = options.metadata || {};
const reqOpts = extend(true, defaultReqOpts, options.request, {
multipart: [
{
'Content-Type': 'application/json',
body: JSON.stringify(metadata),
},
{
'Content-Type': metadata.contentType || 'application/octet-stream',
body: writeStream,
},
],
});
options.makeAuthenticatedRequest(reqOpts, {
onAuthenticated: function(err, authenticatedReqOpts) {
if (err) {
dup.destroy(err);
return;
}
request(authenticatedReqOpts, function(err, resp, body) {
util.handleResp(err, resp, body, function(err, data) {
if (err) {
dup.destroy(err);
return;
}
dup.emit('response', resp);
onComplete(data);
});
});
},
});
}
util.makeWritableStream = makeWritableStream;
/**
 * Returns true if the API request should be retried, based on the error
 * returned from the first attempt. This is used for rate-limit-related errors
 * as well as intermittent server errors.
*
* @param {error} err - The API error to check if it is appropriate to retry.
* @return {boolean} True if the API request should be retried, false otherwise.
*/
function shouldRetryRequest(err) {
if (err) {
if ([429, 500, 502, 503].indexOf(err.code) !== -1) {
return true;
}
if (err.errors) {
for (const i in err.errors) {
const reason = err.errors[i].reason;
if (reason === 'rateLimitExceeded') {
return true;
}
if (reason === 'userRateLimitExceeded') {
return true;
}
}
}
}
return false;
}
util.shouldRetryRequest = shouldRetryRequest;
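// Examples of the retry decision above (not part of the original file):
//
//   shouldRetryRequest({code: 503});                                // true
//   shouldRetryRequest({code: 404});                                // false
//   shouldRetryRequest({errors: [{reason: 'rateLimitExceeded'}]});  // true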
/**
* Get a function for making authenticated requests.
*
* @throws {Error} If a projectId is requested, but not able to be detected.
*
* @param {object} config - Configuration object.
* @param {boolean=} config.autoRetry - Automatically retry requests if the
* response is related to rate limits or certain intermittent server errors.
 * We will exponentially back off subsequent requests by default. (default:
* true)
* @param {object=} config.credentials - Credentials object.
* @param {boolean=} config.customEndpoint - If true, just return the provided
* request options. Default: false.
* @param {string=} config.email - Account email address, required for PEM/P12
* usage.
* @param {number=} config.maxRetries - Maximum number of automatic retries
* attempted before returning the error. (default: 3)
* @param {string=} config.keyFile - Path to a .json, .pem, or .p12 keyfile.
* @param {array} config.scopes - Array of scopes required for the API.
*/
function makeAuthenticatedRequestFactory(config) {
config = config || {};
const googleAutoAuthConfig = extend({}, config);
if (googleAutoAuthConfig.projectId === '{{projectId}}') {
delete googleAutoAuthConfig.projectId;
}
const authClient = googleAuth(googleAutoAuthConfig);
/**
* The returned function that will make an authenticated request.
*
* @param {type} reqOpts - Request options in the format `request` expects.
* @param {object|function} options - Configuration object or callback
* function.
* @param {function=} options.onAuthenticated - If provided, a request will
* not be made. Instead, this function is passed the error & authenticated
* request options.
*/
function makeAuthenticatedRequest(reqOpts, options) {
let stream;
const reqConfig = extend({}, config);
let activeRequest_;
if (!options) {
stream = duplexify();
reqConfig.stream = stream;
}
function onAuthenticated(err, authenticatedReqOpts) {
const autoAuthFailed =
err &&
err.message.indexOf('Could not load the default credentials') > -1;
if (autoAuthFailed) {
// Even though authentication failed, the API might not actually care.
authenticatedReqOpts = reqOpts;
}
if (!err || autoAuthFailed) {
let projectId = authClient.projectId;
if (config.projectId && config.projectId !== '{{projectId}}') {
projectId = config.projectId;
}
try {
authenticatedReqOpts = util.decorateRequest(
authenticatedReqOpts,
projectId
);
err = null;
} catch (e) {
// A projectId was required, but we don't have one.
// Re-use the "Could not load the default credentials error" if auto
// auth failed.
err = err || e;
}
}
if (err) {
if (stream) {
stream.destroy(err);
} else {
(options.onAuthenticated || options)(err);
}
return;
}
if (options && options.onAuthenticated) {
options.onAuthenticated(null, authenticatedReqOpts);
} else {
activeRequest_ = util.makeRequest(
authenticatedReqOpts,
reqConfig,
options
);
}
}
if (reqConfig.customEndpoint) {
// Using a custom API override. Do not use `google-auto-auth` for
// authentication. (ex: connecting to a local Datastore server)
onAuthenticated(null, reqOpts);
} else {
authClient.authorizeRequest(reqOpts, onAuthenticated);
}
if (stream) {
return stream;
}
return {
abort: function() {
if (activeRequest_) {
activeRequest_.abort();
activeRequest_ = null;
}
},
};
}
makeAuthenticatedRequest.getCredentials = authClient.getCredentials.bind(
authClient
);
makeAuthenticatedRequest.authClient = authClient;
return makeAuthenticatedRequest;
}
util.makeAuthenticatedRequestFactory = makeAuthenticatedRequestFactory;
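// Illustrative usage (hypothetical scope and URI, not from the original file):
//
//   const makeAuthenticatedRequest = util.makeAuthenticatedRequestFactory({
//     scopes: ['https://www.googleapis.com/auth/cloud-platform'],
//   });
//
//   // Callback style: the request is made and the parsed response returned.
//   makeAuthenticatedRequest(
//     {uri: 'https://fakeservice.googleapis.com/v1/items'},
//     function(err, body, resp) {}
//   );
//
//   // onAuthenticated style: no request is made; only the decorated and
//   // authorized request options are handed back.
//   makeAuthenticatedRequest(
//     {uri: 'https://fakeservice.googleapis.com/v1/items'},
//     {onAuthenticated: function(err, authenticatedReqOpts) {}}
//   );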
/**
* Make a request through the `retryRequest` module with built-in error handling
* and exponential back off.
*
* @param {object} reqOpts - Request options in the format `request` expects.
* @param {object=} config - Configuration object.
* @param {boolean=} config.autoRetry - Automatically retry requests if the
* response is related to rate limits or certain intermittent server errors.
 * We will exponentially back off subsequent requests by default. (default:
* true)
* @param {number=} config.maxRetries - Maximum number of automatic retries
* attempted before returning the error. (default: 3)
* @param {function} callback - The callback function.
*/
function makeRequest(reqOpts, config, callback) {
if (is.fn(config)) {
callback = config;
config = {};
}
config = config || {};
const options = {
request: request,
retries: config.autoRetry !== false ? config.maxRetries || 3 : 0,
shouldRetryFn: function(httpRespMessage) {
const err = util.parseHttpRespMessage(httpRespMessage).err;
return err && util.shouldRetryRequest(err);
},
};
if (config.stream) {
const dup = config.stream;
let requestStream;
const isGetRequest = (reqOpts.method || 'GET').toUpperCase() === 'GET';
if (isGetRequest) {
requestStream = retryRequest(reqOpts, options);
dup.setReadable(requestStream);
} else {
// Streaming writable HTTP requests cannot be retried.
requestStream = request(reqOpts);
dup.setWritable(requestStream);
}
// Replay the Request events back to the stream.
requestStream
.on('error', dup.destroy.bind(dup))
.on('response', dup.emit.bind(dup, 'response'))
.on('complete', dup.emit.bind(dup, 'complete'));
dup.abort = requestStream.abort;
} else {
return retryRequest(reqOpts, options, function(err, response, body) {
util.handleResp(err, response, body, callback);
});
}
}
util.makeRequest = makeRequest;
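// Illustrative usage (hypothetical values): retries are on by default, with up
// to 3 automatic retries unless `maxRetries` overrides that.
//
//   util.makeRequest(
//     {method: 'GET', uri: 'https://fakeservice.googleapis.com/v1/items'},
//     {autoRetry: true, maxRetries: 5},
//     function(err, body, resp) {}
//   );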
/**
* Decorate the options about to be made in a request.
*
* @param {object} reqOpts - The options to be passed to `request`.
* @param {string} projectId - The project ID.
* @return {object} reqOpts - The decorated reqOpts.
*/
function decorateRequest(reqOpts, projectId) {
delete reqOpts.autoPaginate;
delete reqOpts.autoPaginateVal;
delete reqOpts.objectMode;
if (is.object(reqOpts.qs)) {
delete reqOpts.qs.autoPaginate;
delete reqOpts.qs.autoPaginateVal;
reqOpts.qs = util.replaceProjectIdToken(reqOpts.qs, projectId);
}
if (is.object(reqOpts.json)) {
delete reqOpts.json.autoPaginate;
delete reqOpts.json.autoPaginateVal;
reqOpts.json = util.replaceProjectIdToken(reqOpts.json, projectId);
}
reqOpts.uri = util.replaceProjectIdToken(reqOpts.uri, projectId);
return reqOpts;
}
util.decorateRequest = decorateRequest;
/**
* Populate the `{{projectId}}` placeholder.
*
* @throws {Error} If a projectId is required, but one is not provided.
*
 * @param {*} value - Any input value that may contain a placeholder. Arrays and
 * objects will be looped.
 * @param {string} projectId - A projectId. If not provided, or if it is still
 * the `{{projectId}}` placeholder, a `MissingProjectIdError` is thrown when a
 * placeholder is encountered.
 * @return {*} - The original argument with all placeholders populated.
*/
function replaceProjectIdToken(value, projectId) {
if (is.array(value)) {
value = value.map(function(val) {
return replaceProjectIdToken(val, projectId);
});
}
if (is.object(value) && is.fn(value.hasOwnProperty)) {
for (const opt in value) {
if (value.hasOwnProperty(opt)) {
value[opt] = replaceProjectIdToken(value[opt], projectId);
}
}
}
if (is.string(value) && value.indexOf('{{projectId}}') > -1) {
if (!projectId || projectId === '{{projectId}}') {
throw new util.MissingProjectIdError();
}
value = value.replace(/{{projectId}}/g, projectId);
}
return value;
}
util.replaceProjectIdToken = replaceProjectIdToken;
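// Examples of the replacement above (hypothetical project ID):
//
//   replaceProjectIdToken('projects/{{projectId}}/topics/t', 'my-project');
//   // => 'projects/my-project/topics/t'
//
//   replaceProjectIdToken('projects/{{projectId}}/topics/t', undefined);
//   // => throws util.MissingProjectIdError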
/**
* Extend a global configuration object with user options provided at the time
* of sub-module instantiation.
*
* Connection details currently come in two ways: `credentials` or
* `keyFilename`. Because of this, we have a special exception when overriding a
* global configuration object. If a user provides either to the global
* configuration, then provides another at submodule instantiation-time, the
* latter is preferred.
*
* @param {object} globalConfig - The global configuration object.
* @param {object=} overrides - The instantiation-time configuration object.
* @return {object}
*/
function extendGlobalConfig(globalConfig, overrides) {
globalConfig = globalConfig || {};
overrides = overrides || {};
const defaultConfig = {};
if (process.env.GCLOUD_PROJECT) {
defaultConfig.projectId = process.env.GCLOUD_PROJECT;
}
const options = extend({}, globalConfig);
const hasGlobalConnection = options.credentials || options.keyFilename;
const isOverridingConnection = overrides.credentials || overrides.keyFilename;
if (hasGlobalConnection && isOverridingConnection) {
delete options.credentials;
delete options.keyFilename;
}
const extendedConfig = extend(true, defaultConfig, options, overrides);
// Preserve the original (not cloned) interceptors.
extendedConfig.interceptors_ = globalConfig.interceptors_;
return extendedConfig;
}
util.extendGlobalConfig = extendGlobalConfig;
/**
* Merge and validate API configurations.
*
* @param {object} globalContext - gcloud-level context.
* @param {object} globalContext.config_ - gcloud-level configuration.
* @param {object} localConfig - Service-level configurations.
* @return {object} config - Merged and validated configuration.
*/
function normalizeArguments(globalContext, localConfig) {
const globalConfig = globalContext && globalContext.config_;
return util.extendGlobalConfig(globalConfig, localConfig);
}
util.normalizeArguments = normalizeArguments;
/**
* Limit requests according to a `maxApiCalls` limit.
*
* @param {function} makeRequestFn - The function that will be called.
* @param {object=} options - Configuration object.
* @param {number} options.maxApiCalls - The maximum number of API calls to
* make.
* @param {object} options.streamOptions - Options to pass to the Stream
* constructor.
*/
function createLimiter(makeRequestFn, options) {
options = options || {};
const stream = streamEvents(through.obj(options.streamOptions));
let requestsMade = 0;
let requestsToMake = -1;
if (is.number(options.maxApiCalls)) {
requestsToMake = options.maxApiCalls;
}
return {
makeRequest: function() {
requestsMade++;
if (requestsToMake >= 0 && requestsMade > requestsToMake) {
stream.push(null);
return;
}
makeRequestFn.apply(null, arguments);
return stream;
},
stream: stream,
};
}
util.createLimiter = createLimiter;
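// Illustrative usage (hypothetical request function): with `maxApiCalls: 1`,
// the second call ends the stream instead of invoking the request function.
//
//   const limiter = util.createLimiter(fetchPage, {maxApiCalls: 1});
//   limiter.makeRequest('pageToken1'); // forwarded to fetchPage
//   limiter.makeRequest('pageToken2'); // stream ended; fetchPage not called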
/**
 * Check if an object is an instance of a custom type identified by a module
 * name (e.g. `parentmodule` or `parentmodule/submodule`). When a sub-module is
 * given, the object's constructor name must match it; the parent module name
 * is then matched against the object or any of its `.parent` ancestors.
 *
 * @param {*} unknown - The object to check.
 * @param {string} module - The module name to check against.
 * @return {boolean} True if the object is an instance of the custom type.
 */
function isCustomType(unknown, module) {
function getConstructorName(obj) {
return obj.constructor && obj.constructor.name.toLowerCase();
}
const moduleNameParts = module.split('/');
const parentModuleName =
moduleNameParts[0] && moduleNameParts[0].toLowerCase();
const subModuleName = moduleNameParts[1] && moduleNameParts[1].toLowerCase();
if (subModuleName && getConstructorName(unknown) !== subModuleName) {
return false;
}
let walkingModule = unknown;
do {
if (getConstructorName(walkingModule) === parentModuleName) {
return true;
}
} while ((walkingModule = walkingModule.parent));
return false;
}
util.isCustomType = isCustomType;
/**
* Create a properly-formatted User-Agent string from a package.json file.
*
* @param {object} packageJson - A module's package.json file.
* @return {string} userAgent - The formatted User-Agent string.
*/
function getUserAgentFromPackageJson(packageJson) {
const hyphenatedPackageName = packageJson.name
.replace('@google-cloud', 'gcloud-node') // For legacy purposes.
.replace('/', '-'); // For UA spec-compliance purposes.
return hyphenatedPackageName + '/' + packageJson.version;
}
util.getUserAgentFromPackageJson = getUserAgentFromPackageJson;
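// Example of the formatting above (hypothetical version number):
//
//   getUserAgentFromPackageJson({name: '@google-cloud/firestore', version: '1.2.3'});
//   // => 'gcloud-node-firestore/1.2.3'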
/**
* Wraps a callback style function to conditionally return a promise.
*
* @param {function} originalMethod - The method to promisify.
* @param {object=} options - Promise options.
* @param {boolean} options.singular - Resolve the promise with single arg
* instead of an array.
* @return {function} wrapped
*/
function promisify(originalMethod, options) {
if (originalMethod.promisified_) {
return originalMethod;
}
options = options || {};
const slice = Array.prototype.slice;
const wrapper = function() {
const context = this;
let last;
for (last = arguments.length - 1; last >= 0; last--) {
const arg = arguments[last];
if (is.undefined(arg)) {
continue; // skip trailing undefined.
}
if (!is.fn(arg)) {
break; // non-callback last argument found.
}
return originalMethod.apply(context, arguments);
}
// peel trailing undefined.
const args = slice.call(arguments, 0, last + 1);
let PromiseCtor = Promise;
// Because dedupe will likely create a single install of
// @google-cloud/common to be shared amongst all modules, we need to
// localize it at the Service level.
if (context && context.Promise) {
PromiseCtor = context.Promise;
}
return new PromiseCtor(function(resolve, reject) {
args.push(function() {
const callbackArgs = slice.call(arguments);
const err = callbackArgs.shift();
if (err) {
return reject(err);
}
if (options.singular && callbackArgs.length === 1) {
resolve(callbackArgs[0]);
} else {
resolve(callbackArgs);
}
});
originalMethod.apply(context, args);
});
};
wrapper.promisified_ = true;
return wrapper;
}
util.promisify = promisify;
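// Illustrative sketch (hypothetical function): the wrapper only returns a
// promise when no callback is supplied, and resolves with an array of the
// callback arguments unless `options.singular` is set.
//
//   function getThing(id, callback) {
//     setImmediate(function() {
//       callback(null, {id: id});
//     });
//   }
//   const getThingAsync = util.promisify(getThing);
//
//   getThingAsync('abc', function(err, thing) {});  // callback style
//   getThingAsync('abc').then(function(args) {
//     const thing = args[0];                        // promise style
//   });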
/**
* Promisifies certain Class methods. This will not promisify private or
* streaming methods.
*
* @param {module:common/service} Class - Service class.
* @param {object=} options - Configuration object.
*/
function promisifyAll(Class, options) {
const exclude = (options && options.exclude) || [];
const methods = Object.keys(Class.prototype).filter(function(methodName) {
return (
is.fn(Class.prototype[methodName]) && // is it a function?
!/(^_|(Stream|_)|promise$)/.test(methodName) && // is it promisable?
exclude.indexOf(methodName) === -1
); // is it blacklisted?
});
methods.forEach(function(methodName) {
const originalMethod = Class.prototype[methodName];
if (!originalMethod.promisified_) {
Class.prototype[methodName] = util.promisify(originalMethod, options);
}
});
}
util.promisifyAll = promisifyAll;
/**
* This will mask properties of an object from console.log.
*
* @param {object} object - The object to assign the property to.
* @param {string} propName - Property name.
* @param {*} value - Value.
*/
function privatize(object, propName, value) {
Object.defineProperty(object, propName, {value, writable: true});
}
util.privatize = privatize;
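// Example (hypothetical object): the property is non-enumerable, so it is
// hidden from console.log and Object.keys, but still readable and writable.
//
//   const obj = {};
//   util.privatize(obj, 'baseUrl', 'https://fakeservice.googleapis.com');
//   Object.keys(obj); // => []
//   obj.baseUrl;      // => 'https://fakeservice.googleapis.com'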

View File

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,141 @@
<img src="https://avatars2.githubusercontent.com/u/2810941?v=3&s=96" alt="Google Cloud Platform logo" title="Google Cloud Platform" align="right" height="96" width="96"/>
# [Google Cloud Firestore: Node.js Server SDK](https://github.com/googleapis/nodejs-firestore)
[![release level](https://img.shields.io/badge/release%20level-beta-yellow.svg?style&#x3D;flat)](https://cloud.google.com/terms/launch-stages)
[![CircleCI](https://img.shields.io/circleci/project/github/googleapis/nodejs-firestore.svg?style=flat)](https://circleci.com/gh/googleapis/nodejs-firestore)
[![AppVeyor](https://ci.appveyor.com/api/projects/status/github/googleapis/nodejs-firestore?branch=master&svg=true)](https://ci.appveyor.com/project/googleapis/nodejs-firestore)
[![codecov](https://img.shields.io/codecov/c/github/googleapis/nodejs-firestore/master.svg?style=flat)](https://codecov.io/gh/googleapis/nodejs-firestore)
This is the Node.js Server SDK for
[Google Cloud Firestore](https://firebase.google.com/docs/firestore/). Google
Cloud Firestore is a NoSQL document database built for automatic scaling, high
performance, and ease of application development.
This Cloud Firestore Server SDK uses Google's [Cloud Identity and Access
Management](https://cloud.google.com/firestore/docs/security/iam) for
authentication and should only be used **in trusted environments**. Your Cloud
Identity credentials allow you to bypass all access restrictions and provide
read and write access to all data in your Cloud Firestore project.
The Cloud Firestore Server SDKs are designed to manage the full set of data in
your Cloud Firestore project and work best with reliable network connectivity.
Data operations performed via these SDKs directly access the Cloud Firestore
backend and all document reads and writes are optimized for high throughput.
Google's Server SDKs should not be used in end-user environments, such as on
phones or on publicly hosted websites. If you are
developing a Web or Node.js application that accesses Cloud Firestore on behalf
of end users, use the [`firebase`](https://www.npmjs.com/package/firebase)
Client SDK.
**Table of contents:**
* [Quickstart](#quickstart)
* [Before you begin](#before-you-begin)
* [Installing the client library](#installing-the-client-library)
* [Using the client library](#using-the-client-library)
* [Versioning](#versioning)
* [Contributing](#contributing)
* [License](#license)
## Quickstart
Read more about the client libraries for Cloud APIs, including the older
Google APIs Client Libraries, in [Client Libraries Explained][explained].
[explained]: https://cloud.google.com/apis/docs/client-libraries-explained
* [Cloud Firestore Node.js Client API Reference][client-docs]
* [github.com/googleapis/nodejs-firestore](https://github.com/googleapis/nodejs-firestore)
* [Cloud Firestore Documentation][product-docs]
### Before you begin
1. Select or create a Cloud Platform project.
[Go to the projects page][projects]
1. Enable the Google Cloud Firestore API.
[Enable the API][enable_api]
1. [Set up authentication with a service account][auth] so you can access the
API from your local workstation.
[projects]: https://console.cloud.google.com/project
[enable_api]: https://console.cloud.google.com/flows/enableapi?apiid=firestore.googleapis.com
[auth]: https://cloud.google.com/docs/authentication/getting-started
### Installing the client library
```
npm install --save @google-cloud/firestore
```
### Using the client library
```javascript
const Firestore = require('@google-cloud/firestore');
const firestore = new Firestore({
projectId: 'YOUR_PROJECT_ID',
keyFilename: '/path/to/keyfile.json',
});
const document = firestore.doc('posts/intro-to-firestore');
// Enter new data into the document.
document.set({
title: 'Welcome to Firestore',
body: 'Hello World',
}).then(() => {
// Document created successfully.
});
// Update an existing document.
document.update({
body: 'My first Firestore app',
}).then(() => {
// Document updated successfully.
});
// Read the document.
document.get().then(doc => {
// Document read successfully.
});
// Delete the document.
document.delete().then(() => {
// Document deleted successfully.
});
```
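The snippet above resolves each operation's promise without inspecting the
result. As a minimal sketch (assuming the document created above), a read
returns a `DocumentSnapshot` whose `exists` property and `data()` method expose
the stored fields:

```javascript
// Read the document and log its fields, if it exists.
document.get().then(doc => {
  if (doc.exists) {
    console.log(doc.data()); // plain object of the document's fields
  } else {
    console.log('Document not found');
  }
});
```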
The [Cloud Firestore Node.js Client API Reference][client-docs] documentation
also contains samples.
## Versioning
This library follows [Semantic Versioning](http://semver.org/).
This library is considered to be in **beta**. This means it is expected to be
mostly stable while we work toward a general availability release; however,
complete stability is not guaranteed. We will address issues and requests
against beta libraries with a high priority.
More Information: [Google Cloud Platform Launch Stages][launch_stages]
[launch_stages]: https://cloud.google.com/terms/launch-stages
## Contributing
Contributions welcome! See the [Contributing Guide](https://github.com/googleapis/nodejs-firestore/blob/master/.github/CONTRIBUTING.md).
## License
Apache Version 2.0
See [LICENSE](https://github.com/googleapis/nodejs-firestore/blob/master/LICENSE)
[client-docs]: https://cloud.google.com/nodejs/docs/reference/firestore/latest/
[product-docs]: https://firebase.google.com/docs/firestore/
[shell_img]: //gstatic.com/cloudssh/images/open-btn.png

View File

@ -0,0 +1,33 @@
The `firestore_proto_api.d.ts` and `firestore_proto_api.js` files are generated from the Firestore
protobuf definitions. While there are certainly more proper ways of doing this, the following
process was used:
- Install `google-proto-files` from NPM.
```
npm install --save google-proto-files
```
- Copy the `google/firestore/v1beta1/*.proto` files and place them in the current folder.
```
cp node_modules/google-proto-files/google/firestore/v1beta1/*.proto .
```
- Remove the `google/firestore/v1beta1` path from any import statements in these files.
```
sed -i '' 's/import \"google\/firestore\/v1beta1\//import \"/g' *.proto
```
- Generate the output files
```
mkdir -p out && pbjs --proto_path node_modules/google-proto-files -t static-module -w commonjs -o out/firestore_proto_api.js firestore.proto && pbts -o out/firestore_proto_api.d.ts out/firestore_proto_api.js
```
- Edit the type used for integers greater than 2^53. The gRPC settings we use represent them as
strings, but pbjs uses the "Long" type.
```
sed -i '' 's/number\|Long/number\|string/g' firestore_proto_api.d.ts firestore_proto_api.js
```
TODO: Find a way to properly specify the proto paths for the Firestore imports so this can be automated.
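For reference, a minimal, hypothetical sketch of consuming the generated static module (the pbjs
`static-module -w commonjs` output exports the protobuf root namespace, so message classes live
under their package path):

```javascript
// Hypothetical usage; the path and field values are examples only.
const api = require('./out/firestore_proto_api.js');
const Document = api.google.firestore.v1beta1.Document;

const doc = Document.fromObject({
  name: 'projects/my-project/databases/(default)/documents/posts/intro',
  fields: {title: {stringValue: 'Welcome to Firestore'}},
});
console.log(Document.verify(doc)); // null when the message is well-formed
```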

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -0,0 +1,83 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package google.firestore.v1beta1;
import "google/api/annotations.proto";
import "google/protobuf/timestamp.proto";
option csharp_namespace = "Google.Cloud.Firestore.V1Beta1";
option go_package = "google.golang.org/genproto/googleapis/firestore/v1beta1;firestore";
option java_multiple_files = true;
option java_outer_classname = "CommonProto";
option java_package = "com.google.firestore.v1beta1";
option objc_class_prefix = "GCFS";
option php_namespace = "Google\\Cloud\\Firestore\\V1beta1";
// A set of field paths on a document.
// Used to restrict a get or update operation on a document to a subset of its
// fields.
// This is different from standard field masks, as this is always scoped to a
// [Document][google.firestore.v1beta1.Document], and takes into account the dynamic nature of [Value][google.firestore.v1beta1.Value].
message DocumentMask {
// The list of field paths in the mask. See [Document.fields][google.firestore.v1beta1.Document.fields] for a field
// path syntax reference.
repeated string field_paths = 1;
}
// A precondition on a document, used for conditional operations.
message Precondition {
// The type of precondition.
oneof condition_type {
// When set to `true`, the target document must exist.
// When set to `false`, the target document must not exist.
bool exists = 1;
// When set, the target document must exist and have been last updated at
// that time.
google.protobuf.Timestamp update_time = 2;
}
}
// Options for creating a new transaction.
message TransactionOptions {
// Options for a transaction that can be used to read and write documents.
message ReadWrite {
// An optional transaction to retry.
bytes retry_transaction = 1;
}
// Options for a transaction that can only be used to read documents.
message ReadOnly {
// The consistency mode for this transaction. If not set, defaults to strong
// consistency.
oneof consistency_selector {
// Reads documents at the given time.
// This may not be older than 60 seconds.
google.protobuf.Timestamp read_time = 2;
}
}
// The mode of the transaction.
oneof mode {
// The transaction can only be used for read operations.
ReadOnly read_only = 2;
// The transaction can be used for both read and write operations.
ReadWrite read_write = 3;
}
}

View File

@ -0,0 +1,150 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package google.firestore.v1beta1;
import "google/api/annotations.proto";
import "google/protobuf/struct.proto";
import "google/protobuf/timestamp.proto";
import "google/type/latlng.proto";
option csharp_namespace = "Google.Cloud.Firestore.V1Beta1";
option go_package = "google.golang.org/genproto/googleapis/firestore/v1beta1;firestore";
option java_multiple_files = true;
option java_outer_classname = "DocumentProto";
option java_package = "com.google.firestore.v1beta1";
option objc_class_prefix = "GCFS";
option php_namespace = "Google\\Cloud\\Firestore\\V1beta1";
// A Firestore document.
//
// Must not exceed 1 MiB - 4 bytes.
message Document {
// The resource name of the document, for example
// `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
string name = 1;
// The document's fields.
//
// The map keys represent field names.
//
// A simple field name contains only characters `a` to `z`, `A` to `Z`,
// `0` to `9`, or `_`, and must not start with `0` to `9`. For example,
// `foo_bar_17`.
//
// Field names matching the regular expression `__.*__` are reserved. Reserved
// field names are forbidden except in certain documented contexts. The map
// keys, represented as UTF-8, must not exceed 1,500 bytes and cannot be
// empty.
//
// Field paths may be used in other contexts to refer to structured fields
// defined here. For `map_value`, the field path is represented by the simple
// or quoted field names of the containing fields, delimited by `.`. For
// example, the structured field
// `"foo" : { map_value: { "x&y" : { string_value: "hello" }}}` would be
// represented by the field path `foo.x&y`.
//
// Within a field path, a quoted field name starts and ends with `` ` `` and
// may contain any character. Some characters, including `` ` ``, must be
// escaped using a `\`. For example, `` `x&y` `` represents `x&y` and
// `` `bak\`tik` `` represents `` bak`tik ``.
map<string, Value> fields = 2;
// Output only. The time at which the document was created.
//
// This value increases monotonically when a document is deleted then
// recreated. It can also be compared to values from other documents and
// the `read_time` of a query.
google.protobuf.Timestamp create_time = 3;
// Output only. The time at which the document was last changed.
//
// This value is initially set to the `create_time` then increases
// monotonically with each change to the document. It can also be
// compared to values from other documents and the `read_time` of a query.
google.protobuf.Timestamp update_time = 4;
}
// A message that can hold any of the supported value types.
message Value {
// Must have a value set.
oneof value_type {
// A null value.
google.protobuf.NullValue null_value = 11;
// A boolean value.
bool boolean_value = 1;
// An integer value.
int64 integer_value = 2;
// A double value.
double double_value = 3;
// A timestamp value.
//
// Precise only to microseconds. When stored, any additional precision is
// rounded down.
google.protobuf.Timestamp timestamp_value = 10;
// A string value.
//
// The string, represented as UTF-8, must not exceed 1 MiB - 89 bytes.
// Only the first 1,500 bytes of the UTF-8 representation are considered by
// queries.
string string_value = 17;
// A bytes value.
//
// Must not exceed 1 MiB - 89 bytes.
// Only the first 1,500 bytes are considered by queries.
bytes bytes_value = 18;
// A reference to a document. For example:
// `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
string reference_value = 5;
// A geo point value representing a point on the surface of Earth.
google.type.LatLng geo_point_value = 8;
// An array value.
//
    // Cannot directly contain another array value, though it can contain a
    // map which contains another array.
ArrayValue array_value = 9;
// A map value.
MapValue map_value = 6;
}
}
// An array value.
message ArrayValue {
// Values in the array.
repeated Value values = 1;
}
// A map value.
message MapValue {
// The map's fields.
//
// The map keys represent field names. Field names matching the regular
// expression `__.*__` are reserved. Reserved field names are forbidden except
// in certain documented contexts. The map keys, represented as UTF-8, must
// not exceed 1,500 bytes and cannot be empty.
map<string, Value> fields = 1;
}

View File

@ -0,0 +1,760 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package google.firestore.v1beta1;
import "google/api/annotations.proto";
import "google/firestore/v1beta1/common.proto";
import "google/firestore/v1beta1/document.proto";
import "google/firestore/v1beta1/query.proto";
import "google/firestore/v1beta1/write.proto";
import "google/protobuf/empty.proto";
import "google/protobuf/timestamp.proto";
import "google/rpc/status.proto";
option csharp_namespace = "Google.Cloud.Firestore.V1Beta1";
option go_package = "google.golang.org/genproto/googleapis/firestore/v1beta1;firestore";
option java_multiple_files = true;
option java_outer_classname = "FirestoreProto";
option java_package = "com.google.firestore.v1beta1";
option objc_class_prefix = "GCFS";
option php_namespace = "Google\\Cloud\\Firestore\\V1beta1";
// Specification of the Firestore API.
// The Cloud Firestore service.
//
// This service exposes several types of comparable timestamps:
//
// * `create_time` - The time at which a document was created. Changes only
// when a document is deleted, then re-created. Increases in a strict
// monotonic fashion.
// * `update_time` - The time at which a document was last updated. Changes
// every time a document is modified. Does not change when a write results
// in no modifications. Increases in a strict monotonic fashion.
// * `read_time` - The time at which a particular state was observed. Used
// to denote a consistent snapshot of the database or the time at which a
// Document was observed to not exist.
// * `commit_time` - The time at which the writes in a transaction were
// committed. Any read with an equal or greater `read_time` is guaranteed
// to see the effects of the transaction.
service Firestore {
// Gets a single document.
rpc GetDocument(GetDocumentRequest) returns (Document) {
option (google.api.http) = {
get: "/v1beta1/{name=projects/*/databases/*/documents/*/**}"
};
}
// Lists documents.
rpc ListDocuments(ListDocumentsRequest) returns (ListDocumentsResponse) {
option (google.api.http) = {
get: "/v1beta1/{parent=projects/*/databases/*/documents/*/**}/{collection_id}"
};
}
// Creates a new document.
rpc CreateDocument(CreateDocumentRequest) returns (Document) {
option (google.api.http) = {
post: "/v1beta1/{parent=projects/*/databases/*/documents/**}/{collection_id}"
body: "document"
};
}
// Updates or inserts a document.
rpc UpdateDocument(UpdateDocumentRequest) returns (Document) {
option (google.api.http) = {
patch: "/v1beta1/{document.name=projects/*/databases/*/documents/*/**}"
body: "document"
};
}
// Deletes a document.
rpc DeleteDocument(DeleteDocumentRequest) returns (google.protobuf.Empty) {
option (google.api.http) = {
delete: "/v1beta1/{name=projects/*/databases/*/documents/*/**}"
};
}
// Gets multiple documents.
//
// Documents returned by this method are not guaranteed to be returned in the
// same order that they were requested.
rpc BatchGetDocuments(BatchGetDocumentsRequest) returns (stream BatchGetDocumentsResponse) {
option (google.api.http) = {
post: "/v1beta1/{database=projects/*/databases/*}/documents:batchGet"
body: "*"
};
}
// Starts a new transaction.
rpc BeginTransaction(BeginTransactionRequest) returns (BeginTransactionResponse) {
option (google.api.http) = {
post: "/v1beta1/{database=projects/*/databases/*}/documents:beginTransaction"
body: "*"
};
}
// Commits a transaction, while optionally updating documents.
rpc Commit(CommitRequest) returns (CommitResponse) {
option (google.api.http) = {
post: "/v1beta1/{database=projects/*/databases/*}/documents:commit"
body: "*"
};
}
// Rolls back a transaction.
rpc Rollback(RollbackRequest) returns (google.protobuf.Empty) {
option (google.api.http) = {
post: "/v1beta1/{database=projects/*/databases/*}/documents:rollback"
body: "*"
};
}
// Runs a query.
rpc RunQuery(RunQueryRequest) returns (stream RunQueryResponse) {
option (google.api.http) = {
post: "/v1beta1/{parent=projects/*/databases/*/documents}:runQuery"
body: "*"
additional_bindings {
post: "/v1beta1/{parent=projects/*/databases/*/documents/*/**}:runQuery"
body: "*"
}
};
}
// Streams batches of document updates and deletes, in order.
rpc Write(stream WriteRequest) returns (stream WriteResponse) {
option (google.api.http) = {
post: "/v1beta1/{database=projects/*/databases/*}/documents:write"
body: "*"
};
}
// Listens to changes.
rpc Listen(stream ListenRequest) returns (stream ListenResponse) {
option (google.api.http) = {
post: "/v1beta1/{database=projects/*/databases/*}/documents:listen"
body: "*"
};
}
// Lists all the collection IDs underneath a document.
rpc ListCollectionIds(ListCollectionIdsRequest) returns (ListCollectionIdsResponse) {
option (google.api.http) = {
post: "/v1beta1/{parent=projects/*/databases/*/documents}:listCollectionIds"
body: "*"
additional_bindings {
post: "/v1beta1/{parent=projects/*/databases/*/documents/*/**}:listCollectionIds"
body: "*"
}
};
}
}
// The request for [Firestore.GetDocument][google.firestore.v1beta1.Firestore.GetDocument].
message GetDocumentRequest {
// The resource name of the Document to get. In the format:
// `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
string name = 1;
// The fields to return. If not set, returns all fields.
//
// If the document has a field that is not present in this mask, that field
// will not be returned in the response.
DocumentMask mask = 2;
// The consistency mode for this transaction.
// If not set, defaults to strong consistency.
oneof consistency_selector {
// Reads the document in a transaction.
bytes transaction = 3;
// Reads the version of the document at the given time.
// This may not be older than 60 seconds.
google.protobuf.Timestamp read_time = 5;
}
}
// The request for [Firestore.ListDocuments][google.firestore.v1beta1.Firestore.ListDocuments].
message ListDocumentsRequest {
// The parent resource name. In the format:
// `projects/{project_id}/databases/{database_id}/documents` or
// `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
// For example:
// `projects/my-project/databases/my-database/documents` or
// `projects/my-project/databases/my-database/documents/chatrooms/my-chatroom`
string parent = 1;
// The collection ID, relative to `parent`, to list. For example: `chatrooms`
// or `messages`.
string collection_id = 2;
// The maximum number of documents to return.
int32 page_size = 3;
// The `next_page_token` value returned from a previous List request, if any.
string page_token = 4;
// The order to sort results by. For example: `priority desc, name`.
string order_by = 6;
// The fields to return. If not set, returns all fields.
//
// If a document has a field that is not present in this mask, that field
// will not be returned in the response.
DocumentMask mask = 7;
// The consistency mode for this transaction.
// If not set, defaults to strong consistency.
oneof consistency_selector {
// Reads documents in a transaction.
bytes transaction = 8;
// Reads documents as they were at the given time.
// This may not be older than 60 seconds.
google.protobuf.Timestamp read_time = 10;
}
// If the list should show missing documents. A missing document is a
// document that does not exist but has sub-documents. These documents will
// be returned with a key but will not have fields, [Document.create_time][google.firestore.v1beta1.Document.create_time],
// or [Document.update_time][google.firestore.v1beta1.Document.update_time] set.
//
// Requests with `show_missing` may not specify `where` or
// `order_by`.
bool show_missing = 12;
}
// The response for [Firestore.ListDocuments][google.firestore.v1beta1.Firestore.ListDocuments].
message ListDocumentsResponse {
// The Documents found.
repeated Document documents = 1;
// The next page token.
string next_page_token = 2;
}
// The request for [Firestore.CreateDocument][google.firestore.v1beta1.Firestore.CreateDocument].
message CreateDocumentRequest {
// The parent resource. For example:
// `projects/{project_id}/databases/{database_id}/documents` or
// `projects/{project_id}/databases/{database_id}/documents/chatrooms/{chatroom_id}`
string parent = 1;
// The collection ID, relative to `parent`, in which to create the document. For example: `chatrooms`.
string collection_id = 2;
// The client-assigned document ID to use for this document.
//
// Optional. If not specified, an ID will be assigned by the service.
string document_id = 3;
// The document to create. `name` must not be set.
Document document = 4;
// The fields to return. If not set, returns all fields.
//
// If the document has a field that is not present in this mask, that field
// will not be returned in the response.
DocumentMask mask = 5;
}
// The request for [Firestore.UpdateDocument][google.firestore.v1beta1.Firestore.UpdateDocument].
message UpdateDocumentRequest {
// The updated document.
// Creates the document if it does not already exist.
Document document = 1;
// The fields to update.
// None of the field paths in the mask may contain a reserved name.
//
// If the document exists on the server and has fields not referenced in the
// mask, they are left unchanged.
// Fields referenced in the mask, but not present in the input document, are
// deleted from the document on the server.
DocumentMask update_mask = 2;
// The fields to return. If not set, returns all fields.
//
// If the document has a field that is not present in this mask, that field
// will not be returned in the response.
DocumentMask mask = 3;
// An optional precondition on the document.
// The request will fail if this is set and not met by the target document.
Precondition current_document = 4;
}
// The request for [Firestore.DeleteDocument][google.firestore.v1beta1.Firestore.DeleteDocument].
message DeleteDocumentRequest {
// The resource name of the Document to delete. In the format:
// `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
string name = 1;
// An optional precondition on the document.
// The request will fail if this is set and not met by the target document.
Precondition current_document = 2;
}
// The request for [Firestore.BatchGetDocuments][google.firestore.v1beta1.Firestore.BatchGetDocuments].
message BatchGetDocumentsRequest {
// The database name. In the format:
// `projects/{project_id}/databases/{database_id}`.
string database = 1;
// The names of the documents to retrieve. In the format:
// `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
// The request will fail if any of the documents are not child resources of
// the given `database`. Duplicate names will be elided.
repeated string documents = 2;
// The fields to return. If not set, returns all fields.
//
// If a document has a field that is not present in this mask, that field will
// not be returned in the response.
DocumentMask mask = 3;
// The consistency mode for this transaction.
// If not set, defaults to strong consistency.
oneof consistency_selector {
// Reads documents in a transaction.
bytes transaction = 4;
// Starts a new transaction and reads the documents.
// Defaults to a read-only transaction.
// The new transaction ID will be returned as the first response in the
// stream.
TransactionOptions new_transaction = 5;
// Reads documents as they were at the given time.
// This may not be older than 60 seconds.
google.protobuf.Timestamp read_time = 7;
}
}
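As a hedged sketch (not generated code), a `BatchGetDocumentsRequest` that starts a read-only transaction and fetches two documents might be shaped as follows; the resource names are placeholders and the field names follow the proto definitions above.

const batchGetDocumentsRequest = {
  database: 'projects/my-project/databases/(default)',
  documents: [
    'projects/my-project/databases/(default)/documents/chatrooms/room-a',
    'projects/my-project/databases/(default)/documents/chatrooms/room-b',
  ],
  // Start a new read-only transaction; its ID is returned in the first
  // streamed response and can be reused in later reads.
  new_transaction: { read_only: {} },
};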
// The streamed response for [Firestore.BatchGetDocuments][google.firestore.v1beta1.Firestore.BatchGetDocuments].
message BatchGetDocumentsResponse {
// A single result.
// This can be empty if the server is just returning a transaction.
oneof result {
// A document that was requested.
Document found = 1;
// A document name that was requested but does not exist. In the format:
// `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
string missing = 2;
}
// The transaction that was started as part of this request.
// Will only be set in the first response, and only if
// [BatchGetDocumentsRequest.new_transaction][google.firestore.v1beta1.BatchGetDocumentsRequest.new_transaction] was set in the request.
bytes transaction = 3;
// The time at which the document was read.
// This may be monotonically increasing; in this case, the previous documents
// in the result stream are guaranteed not to have changed between their
// `read_time` and this one.
google.protobuf.Timestamp read_time = 4;
}
// The request for [Firestore.BeginTransaction][google.firestore.v1beta1.Firestore.BeginTransaction].
message BeginTransactionRequest {
// The database name. In the format:
// `projects/{project_id}/databases/{database_id}`.
string database = 1;
// The options for the transaction.
// Defaults to a read-write transaction.
TransactionOptions options = 2;
}
// The response for [Firestore.BeginTransaction][google.firestore.v1beta1.Firestore.BeginTransaction].
message BeginTransactionResponse {
// The transaction that was started.
bytes transaction = 1;
}
// The request for [Firestore.Commit][google.firestore.v1beta1.Firestore.Commit].
message CommitRequest {
// The database name. In the format:
// `projects/{project_id}/databases/{database_id}`.
string database = 1;
// The writes to apply.
//
// Always executed atomically and in order.
repeated Write writes = 2;
// If set, applies all writes in this transaction, and commits it.
bytes transaction = 3;
}
// The response for [Firestore.Commit][google.firestore.v1beta1.Firestore.Commit].
message CommitResponse {
// The result of applying the writes.
//
// The i-th write result corresponds to the i-th write in the
// request.
repeated WriteResult write_results = 1;
// The time at which the commit occurred.
google.protobuf.Timestamp commit_time = 2;
}
// The request for [Firestore.Rollback][google.firestore.v1beta1.Firestore.Rollback].
message RollbackRequest {
// The database name. In the format:
// `projects/{project_id}/databases/{database_id}`.
string database = 1;
// The transaction to roll back.
bytes transaction = 2;
}
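Taken together, BeginTransaction, Commit and Rollback form the transaction lifecycle. The following is a rough sketch of the request shapes only (placeholder names, proto-style field names; `txn` stands in for the bytes returned in `BeginTransactionResponse.transaction`).

const database = 'projects/my-project/databases/(default)';

const beginTransactionRequest = {
  database,
  options: { read_write: {} },  // defaults to a read-write transaction if omitted
};

// Placeholder for BeginTransactionResponse.transaction.
let txn = Buffer.alloc(0);

// Either commit the buffered writes...
const commitRequest = {
  database,
  writes: [],        // Write messages, applied atomically and in order
  transaction: txn,  // applies the writes in this transaction and commits it
};

// ...or abandon the transaction.
const rollbackRequest = { database, transaction: txn };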
// The request for [Firestore.RunQuery][google.firestore.v1beta1.Firestore.RunQuery].
message RunQueryRequest {
// The parent resource name. In the format:
// `projects/{project_id}/databases/{database_id}/documents` or
// `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
// For example:
// `projects/my-project/databases/my-database/documents` or
// `projects/my-project/databases/my-database/documents/chatrooms/my-chatroom`
string parent = 1;
// The query to run.
oneof query_type {
// A structured query.
StructuredQuery structured_query = 2;
}
// The consistency mode for this transaction.
// If not set, defaults to strong consistency.
oneof consistency_selector {
// Reads documents in a transaction.
bytes transaction = 5;
// Starts a new transaction and reads the documents.
// Defaults to a read-only transaction.
// The new transaction ID will be returned as the first response in the
// stream.
TransactionOptions new_transaction = 6;
// Reads documents as they were at the given time.
// This may not be older than 60 seconds.
google.protobuf.Timestamp read_time = 7;
}
}
// The response for [Firestore.RunQuery][google.firestore.v1beta1.Firestore.RunQuery].
message RunQueryResponse {
// The transaction that was started as part of this request.
// Can only be set in the first response, and only if
// [RunQueryRequest.new_transaction][google.firestore.v1beta1.RunQueryRequest.new_transaction] was set in the request.
// If set, no other fields will be set in this response.
bytes transaction = 2;
// A query result.
// Not set when reporting partial progress.
Document document = 1;
// The time at which the document was read. This may be monotonically
// increasing; in this case, the previous documents in the result stream are
// guaranteed not to have changed between their `read_time` and this one.
//
// If the query returns no results, a response with `read_time` and no
// `document` will be sent, and this represents the time at which the query
// was run.
google.protobuf.Timestamp read_time = 3;
// The number of results that have been skipped due to an offset between
// the last response and the current response.
int32 skipped_results = 4;
}
// The request for [Firestore.Write][google.firestore.v1beta1.Firestore.Write].
//
// The first request creates a stream, or resumes an existing one from a token.
//
// When creating a new stream, the server replies with a response containing
// only an ID and a token, to use in the next request.
//
// When resuming a stream, the server first streams any responses later than the
// given token, then a response containing only an up-to-date token, to use in
// the next request.
message WriteRequest {
// The database name. In the format:
// `projects/{project_id}/databases/{database_id}`.
// This is only required in the first message.
string database = 1;
// The ID of the write stream to resume.
// This may only be set in the first message. When left empty, a new write
// stream will be created.
string stream_id = 2;
// The writes to apply.
//
// Always executed atomically and in order.
// This must be empty on the first request.
// This may be empty on the last request.
// This must not be empty on all other requests.
repeated Write writes = 3;
// A stream token that was previously sent by the server.
//
// The client should set this field to the token from the most recent
// [WriteResponse][google.firestore.v1beta1.WriteResponse] it has received. This acknowledges that the client has
// received responses up to this token. After sending this token, earlier
// tokens may not be used anymore.
//
// The server may close the stream if there are too many unacknowledged
// responses.
//
// Leave this field unset when creating a new stream. To resume a stream at
// a specific point, set this field and the `stream_id` field.
bytes stream_token = 4;
// Labels associated with this write request.
map<string, string> labels = 5;
}
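The comments above describe a handshake: the first `WriteRequest` on a new stream names only the database, and every later request echoes the most recent `stream_token`. A rough sketch of the first two messages follows; resource names are placeholders and the `Value` encoding is shown in its Proto3 JSON form.

// First message on a new stream: database only, no writes, no token.
const firstWriteRequest = {
  database: 'projects/my-project/databases/(default)',
};

// A later message, after the server has replied with stream_id and stream_token.
const followUpWriteRequest = streamToken => ({
  stream_token: streamToken,  // acknowledges responses up to this token
  writes: [{
    update: {
      name: 'projects/my-project/databases/(default)/documents/chatrooms/my-chatroom',
      fields: { title: { stringValue: 'hello' } },
    },
  }],
});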
// The response for [Firestore.Write][google.firestore.v1beta1.Firestore.Write].
message WriteResponse {
// The ID of the stream.
// Only set on the first message, when a new stream was created.
string stream_id = 1;
// A token that represents the position of this response in the stream.
// This can be used by a client to resume the stream at this point.
//
// This field is always set.
bytes stream_token = 2;
// The result of applying the writes.
//
// The i-th write result corresponds to the i-th write in the
// request.
repeated WriteResult write_results = 3;
// The time at which the commit occurred.
google.protobuf.Timestamp commit_time = 4;
}
// A request for [Firestore.Listen][google.firestore.v1beta1.Firestore.Listen]
message ListenRequest {
// The database name. In the format:
// `projects/{project_id}/databases/{database_id}`.
string database = 1;
// The supported target changes.
oneof target_change {
// A target to add to this stream.
Target add_target = 2;
// The ID of a target to remove from this stream.
int32 remove_target = 3;
}
// Labels associated with this target change.
map<string, string> labels = 4;
}
// The response for [Firestore.Listen][google.firestore.v1beta1.Firestore.Listen].
message ListenResponse {
// The supported responses.
oneof response_type {
// Targets have changed.
TargetChange target_change = 2;
// A [Document][google.firestore.v1beta1.Document] has changed.
DocumentChange document_change = 3;
// A [Document][google.firestore.v1beta1.Document] has been deleted.
DocumentDelete document_delete = 4;
// A [Document][google.firestore.v1beta1.Document] has been removed from a target (because it is no longer
// relevant to that target).
DocumentRemove document_remove = 6;
// A filter to apply to the set of documents previously returned for the
// given target.
//
// Returned when documents may have been removed from the given target, but
// the exact documents are unknown.
ExistenceFilter filter = 5;
}
}
// A specification of a set of documents to listen to.
message Target {
// A target specified by a set of document names.
message DocumentsTarget {
// The names of the documents to retrieve. In the format:
// `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
// The request will fail if any of the documents are not child resources of
// the given `database`. Duplicate names will be elided.
repeated string documents = 2;
}
// A target specified by a query.
message QueryTarget {
// The parent resource name. In the format:
// `projects/{project_id}/databases/{database_id}/documents` or
// `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
// For example:
// `projects/my-project/databases/my-database/documents` or
// `projects/my-project/databases/my-database/documents/chatrooms/my-chatroom`
string parent = 1;
// The query to run.
oneof query_type {
// A structured query.
StructuredQuery structured_query = 2;
}
}
// The type of target to listen to.
oneof target_type {
// A target specified by a query.
QueryTarget query = 2;
// A target specified by a set of document names.
DocumentsTarget documents = 3;
}
// When to start listening.
//
// If not specified, all matching Documents are returned before any
// subsequent changes.
oneof resume_type {
// A resume token from a prior [TargetChange][google.firestore.v1beta1.TargetChange] for an identical target.
//
// Using a resume token with a different target is unsupported and may fail.
bytes resume_token = 4;
// Start listening after a specific `read_time`.
//
// The client must know the state of matching documents at this time.
google.protobuf.Timestamp read_time = 11;
}
// A client-provided target ID.
//
// If not set, the server will assign an ID for the target.
//
// Used for resuming a target without changing IDs. The IDs can either be
// client-assigned or be server-assigned in a previous stream. All targets
// with client-provided IDs must be added before adding a target that needs
// a server-assigned ID.
int32 target_id = 5;
// If the target should be removed once it is current and consistent.
bool once = 6;
}
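As an illustrative sketch only, a `ListenRequest` that adds a documents target (and could later resume from a token) might look like this; the names are placeholders.

const listenRequest = {
  database: 'projects/my-project/databases/(default)',
  add_target: {
    target_id: 1,  // client-provided ID; the server assigns one if omitted
    documents: {
      documents: [
        'projects/my-project/databases/(default)/documents/chatrooms/my-chatroom',
      ],
    },
    // Optional: resume from a token captured from an earlier TargetChange.
    // resume_token: previousResumeToken,
  },
};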
// Targets being watched have changed.
message TargetChange {
// The type of change.
enum TargetChangeType {
// No change has occurred. Used only to send an updated `resume_token`.
NO_CHANGE = 0;
// The targets have been added.
ADD = 1;
// The targets have been removed.
REMOVE = 2;
// The targets reflect all changes committed before the targets were added
// to the stream.
//
// This will be sent after or with a `read_time` that is greater than or
// equal to the time at which the targets were added.
//
// Listeners can wait for this change if read-after-write semantics
// are desired.
CURRENT = 3;
// The targets have been reset, and a new initial state for the targets
// will be returned in subsequent changes.
//
// After the initial state is complete, `CURRENT` will be returned even
// if the target was previously indicated to be `CURRENT`.
RESET = 4;
}
// The type of change that occurred.
TargetChangeType target_change_type = 1;
// The target IDs of targets that have changed.
//
// If empty, the change applies to all targets.
//
// For `target_change_type=ADD`, the order of the target IDs matches the order
// of the requests to add the targets. This allows clients to unambiguously
// associate server-assigned target IDs with added targets.
//
// For other states, the order of the target IDs is not defined.
repeated int32 target_ids = 2;
// The error that resulted in this change, if applicable.
google.rpc.Status cause = 3;
// A token that can be used to resume the stream for the given `target_ids`,
// or all targets if `target_ids` is empty.
//
// Not set on every target change.
bytes resume_token = 4;
// The consistent `read_time` for the given `target_ids` (omitted when the
// target_ids are not at a consistent snapshot).
//
// The stream is guaranteed to send a `read_time` with `target_ids` empty
// whenever the entire stream reaches a new consistent snapshot. ADD,
// CURRENT, and RESET messages are guaranteed to (eventually) result in a
// new consistent snapshot (while NO_CHANGE and REMOVE messages are not).
//
// For a given stream, `read_time` is guaranteed to be monotonically
// increasing.
google.protobuf.Timestamp read_time = 6;
}
// The request for [Firestore.ListCollectionIds][google.firestore.v1beta1.Firestore.ListCollectionIds].
message ListCollectionIdsRequest {
// The parent document. In the format:
// `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
// For example:
// `projects/my-project/databases/my-database/documents/chatrooms/my-chatroom`
string parent = 1;
// The maximum number of results to return.
int32 page_size = 2;
// A page token. Must be a value from
// [ListCollectionIdsResponse][google.firestore.v1beta1.ListCollectionIdsResponse].
string page_token = 3;
}
// The response from [Firestore.ListCollectionIds][google.firestore.v1beta1.Firestore.ListCollectionIds].
message ListCollectionIdsResponse {
// The collection IDs.
repeated string collection_ids = 1;
// A page token that may be used to continue the list.
string next_page_token = 2;
}
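To show how the paging fields above fit together, here is a hedged sketch of walking all pages of ListCollectionIds; `listCollectionIds` is a hypothetical stub that issues the RPC and resolves with a `ListCollectionIdsResponse`-shaped object.

async function listAllCollectionIds(listCollectionIds, parentDocumentName) {
  const ids = [];
  let pageToken = '';
  do {
    // `listCollectionIds` is a stand-in for whatever stub sends the request.
    const response = await listCollectionIds({
      parent: parentDocumentName,
      page_size: 100,
      page_token: pageToken,
    });
    ids.push(...(response.collection_ids || []));
    pageToken = response.next_page_token || '';
  } while (pageToken);
  return ids;
}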

View File

@ -0,0 +1,235 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package google.firestore.v1beta1;
import "google/api/annotations.proto";
import "google/firestore/v1beta1/document.proto";
import "google/protobuf/wrappers.proto";
option csharp_namespace = "Google.Cloud.Firestore.V1Beta1";
option go_package = "google.golang.org/genproto/googleapis/firestore/v1beta1;firestore";
option java_multiple_files = true;
option java_outer_classname = "QueryProto";
option java_package = "com.google.firestore.v1beta1";
option objc_class_prefix = "GCFS";
option php_namespace = "Google\\Cloud\\Firestore\\V1beta1";
// A Firestore query.
message StructuredQuery {
// A selection of a collection, such as `messages as m1`.
message CollectionSelector {
// The collection ID.
// When set, selects only collections with this ID.
string collection_id = 2;
// When false, selects only collections that are immediate children of
// the `parent` specified in the containing `RunQueryRequest`.
// When true, selects all descendant collections.
bool all_descendants = 3;
}
// A filter.
message Filter {
// The type of filter.
oneof filter_type {
// A composite filter.
CompositeFilter composite_filter = 1;
// A filter on a document field.
FieldFilter field_filter = 2;
// A filter that takes exactly one argument.
UnaryFilter unary_filter = 3;
}
}
// A filter that merges multiple other filters using the given operator.
message CompositeFilter {
// A composite filter operator.
enum Operator {
// Unspecified. This value must not be used.
OPERATOR_UNSPECIFIED = 0;
// The results are required to satisfy each of the combined filters.
AND = 1;
}
// The operator for combining multiple filters.
Operator op = 1;
// The list of filters to combine.
// Must contain at least one filter.
repeated Filter filters = 2;
}
// A filter on a specific field.
message FieldFilter {
// A field filter operator.
enum Operator {
// Unspecified. This value must not be used.
OPERATOR_UNSPECIFIED = 0;
// Less than. Requires that the field come first in `order_by`.
LESS_THAN = 1;
// Less than or equal. Requires that the field come first in `order_by`.
LESS_THAN_OR_EQUAL = 2;
// Greater than. Requires that the field come first in `order_by`.
GREATER_THAN = 3;
// Greater than or equal. Requires that the field come first in
// `order_by`.
GREATER_THAN_OR_EQUAL = 4;
// Equal.
EQUAL = 5;
// Contains. Requires that the field is an array.
ARRAY_CONTAINS = 7;
}
// The field to filter by.
FieldReference field = 1;
// The operator to filter by.
Operator op = 2;
// The value to compare to.
Value value = 3;
}
// A filter with a single operand.
message UnaryFilter {
// A unary operator.
enum Operator {
// Unspecified. This value must not be used.
OPERATOR_UNSPECIFIED = 0;
// Test if a field is equal to NaN.
IS_NAN = 2;
// Test if an expression evaluates to Null.
IS_NULL = 3;
}
// The unary operator to apply.
Operator op = 1;
// The argument to the filter.
oneof operand_type {
// The field to which to apply the operator.
FieldReference field = 2;
}
}
// An order on a field.
message Order {
// The field to order by.
FieldReference field = 1;
// The direction to order by. Defaults to `ASCENDING`.
Direction direction = 2;
}
// A reference to a field, such as `max(messages.time) as max_time`.
message FieldReference {
string field_path = 2;
}
// The projection of a document's fields to return.
message Projection {
// The fields to return.
//
// If empty, all fields are returned. To only return the name
// of the document, use `['__name__']`.
repeated FieldReference fields = 2;
}
// A sort direction.
enum Direction {
// Unspecified.
DIRECTION_UNSPECIFIED = 0;
// Ascending.
ASCENDING = 1;
// Descending.
DESCENDING = 2;
}
// The projection to return.
Projection select = 1;
// The collections to query.
repeated CollectionSelector from = 2;
// The filter to apply.
Filter where = 3;
// The order to apply to the query results.
//
// Firestore guarantees a stable ordering through the following rules:
//
// * Any field required to appear in `order_by`, that is not already
// specified in `order_by`, is appended to the order in field name order
// by default.
// * If an order on `__name__` is not specified, it is appended by default.
//
// Fields are appended with the same sort direction as the last order
// specified, or 'ASCENDING' if no order was specified. For example:
//
// * `SELECT * FROM Foo ORDER BY A` becomes
// `SELECT * FROM Foo ORDER BY A, __name__`
// * `SELECT * FROM Foo ORDER BY A DESC` becomes
// `SELECT * FROM Foo ORDER BY A DESC, __name__ DESC`
// * `SELECT * FROM Foo WHERE A > 1` becomes
// `SELECT * FROM Foo WHERE A > 1 ORDER BY A, __name__`
repeated Order order_by = 4;
// A starting point for the query results.
Cursor start_at = 7;
// An end point for the query results.
Cursor end_at = 8;
// The number of results to skip.
//
// Applies before limit, but after all other constraints. Must be >= 0 if
// specified.
int32 offset = 6;
// The maximum number of results to return.
//
// Applies after all other constraints.
// Must be >= 0 if specified.
google.protobuf.Int32Value limit = 5;
}
// A position in a query result set.
message Cursor {
// The values that represent a position, in the order they appear in
// the order by clause of a query.
//
// Can contain fewer values than specified in the order by clause.
repeated Value values = 1;
// If the position is just before or just after the given values, relative
// to the sort order defined by the query.
bool before = 2;
}
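As a rough sketch (not generated code), a `RunQueryRequest` carrying a `StructuredQuery` built from the messages above might be shaped as follows; names are placeholders, the int64 value is shown as a string per Proto3 JSON, and `limit` uses the Int32Value wrapper form.

const runQueryRequest = {
  parent: 'projects/my-project/databases/(default)/documents',
  structured_query: {
    from: [{ collection_id: 'messages', all_descendants: false }],
    where: {
      field_filter: {
        field: { field_path: 'priority' },
        op: 'GREATER_THAN_OR_EQUAL',
        value: { integerValue: '2' },
      },
    },
    order_by: [
      { field: { field_path: 'priority' }, direction: 'DESCENDING' },
      // An order on __name__ would be appended implicitly, per the rules above.
    ],
    limit: { value: 10 },
  },
};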

View File

@ -0,0 +1,214 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package google.firestore.v1beta1;
import "google/api/annotations.proto";
import "google/firestore/v1beta1/common.proto";
import "google/firestore/v1beta1/document.proto";
import "google/protobuf/timestamp.proto";
option csharp_namespace = "Google.Cloud.Firestore.V1Beta1";
option go_package = "google.golang.org/genproto/googleapis/firestore/v1beta1;firestore";
option java_multiple_files = true;
option java_outer_classname = "WriteProto";
option java_package = "com.google.firestore.v1beta1";
option objc_class_prefix = "GCFS";
option php_namespace = "Google\\Cloud\\Firestore\\V1beta1";
// A write on a document.
message Write {
// The operation to execute.
oneof operation {
// A document to write.
Document update = 1;
// A document name to delete. In the format:
// `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
string delete = 2;
// Applies a transformation to a document.
// At most one `transform` per document is allowed in a given request.
// An `update` cannot follow a `transform` on the same document in a given
// request.
DocumentTransform transform = 6;
}
// The fields to update in this write.
//
// This field can be set only when the operation is `update`.
// If the mask is not set for an `update` and the document exists, any
// existing data will be overwritten.
// If the mask is set and the document on the server has fields not covered by
// the mask, they are left unchanged.
// Fields referenced in the mask, but not present in the input document, are
// deleted from the document on the server.
// The field paths in this mask must not contain a reserved field name.
DocumentMask update_mask = 3;
// An optional precondition on the document.
//
// The write will fail if this is set and not met by the target document.
Precondition current_document = 4;
}
// A transformation of a document.
message DocumentTransform {
// A transformation of a field of the document.
message FieldTransform {
// A value that is calculated by the server.
enum ServerValue {
// Unspecified. This value must not be used.
SERVER_VALUE_UNSPECIFIED = 0;
// The time at which the server processed the request, with millisecond
// precision.
REQUEST_TIME = 1;
}
// The path of the field. See [Document.fields][google.firestore.v1beta1.Document.fields] for the field path syntax
// reference.
string field_path = 1;
// The transformation to apply on the field.
oneof transform_type {
// Sets the field to the given server value.
ServerValue set_to_server_value = 2;
// Append the given elements in order if they are not already present in
// the current field value.
// If the field is not an array, or if the field does not yet exist, it is
// first set to the empty array.
//
// Equivalent numbers of different types (e.g. 3L and 3.0) are
// considered equal when checking if a value is missing.
// NaN is equal to NaN, and Null is equal to Null.
// If the input contains multiple equivalent values, only the first will
// be considered.
//
// The corresponding transform_result will be the null value.
ArrayValue append_missing_elements = 6;
// Remove all of the given elements from the array in the field.
// If the field is not an array, or if the field does not yet exist, it is
// set to the empty array.
//
// Equivalent numbers of different types (e.g. 3L and 3.0) are
// considered equal when deciding whether an element should be removed.
// NaN is equal to NaN, and Null is equal to Null.
// This will remove all equivalent values if there are duplicates.
//
// The corresponding transform_result will be the null value.
ArrayValue remove_all_from_array = 7;
}
}
// The name of the document to transform.
string document = 1;
// The list of transformations to apply to the fields of the document, in
// order.
// This must not be empty.
repeated FieldTransform field_transforms = 2;
}
// The result of applying a write.
message WriteResult {
// The last update time of the document after applying the write. Not set
// after a `delete`.
//
// If the write did not actually change the document, this will be the
// previous update_time.
google.protobuf.Timestamp update_time = 1;
// The results of applying each [DocumentTransform.FieldTransform][google.firestore.v1beta1.DocumentTransform.FieldTransform], in the
// same order.
repeated Value transform_results = 2;
}
// A [Document][google.firestore.v1beta1.Document] has changed.
//
// May be the result of multiple [writes][google.firestore.v1beta1.Write], including deletes, that
// ultimately resulted in a new value for the [Document][google.firestore.v1beta1.Document].
//
// Multiple [DocumentChange][google.firestore.v1beta1.DocumentChange] messages may be returned for the same logical
// change, if multiple targets are affected.
message DocumentChange {
// The new state of the [Document][google.firestore.v1beta1.Document].
//
// If `mask` is set, contains only fields that were updated or added.
Document document = 1;
// A set of target IDs of targets that match this document.
repeated int32 target_ids = 5;
// A set of target IDs for targets that no longer match this document.
repeated int32 removed_target_ids = 6;
}
// A [Document][google.firestore.v1beta1.Document] has been deleted.
//
// May be the result of multiple [writes][google.firestore.v1beta1.Write], including updates, the
// last of which deleted the [Document][google.firestore.v1beta1.Document].
//
// Multiple [DocumentDelete][google.firestore.v1beta1.DocumentDelete] messages may be returned for the same logical
// delete, if multiple targets are affected.
message DocumentDelete {
// The resource name of the [Document][google.firestore.v1beta1.Document] that was deleted.
string document = 1;
// A set of target IDs for targets that previously matched this entity.
repeated int32 removed_target_ids = 6;
// The read timestamp at which the delete was observed.
//
// Greater or equal to the `commit_time` of the delete.
google.protobuf.Timestamp read_time = 4;
}
// A [Document][google.firestore.v1beta1.Document] has been removed from the view of the targets.
//
// Sent if the document is no longer relevant to a target and is out of view.
// Can be sent instead of a DocumentDelete or a DocumentChange if the server
// cannot send the new value of the document.
//
// Multiple [DocumentRemove][google.firestore.v1beta1.DocumentRemove] messages may be returned for the same logical
// write or delete, if multiple targets are affected.
message DocumentRemove {
// The resource name of the [Document][google.firestore.v1beta1.Document] that has gone out of view.
string document = 1;
// A set of target IDs for targets that previously matched this document.
repeated int32 removed_target_ids = 2;
// The read timestamp at which the remove was observed.
//
// Greater or equal to the `commit_time` of the change/delete/remove.
google.protobuf.Timestamp read_time = 4;
}
// A digest of all the documents that match a given target.
message ExistenceFilter {
// The target ID to which this filter applies.
int32 target_id = 1;
// The total count of documents that match [target_id][google.firestore.v1beta1.ExistenceFilter.target_id].
//
// If different from the count of documents in the client that match, the
// client must manually determine which documents no longer match the target.
int32 count = 2;
}
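To tie the transform messages above together, here is an illustrative sketch of a `Write` that carries a `DocumentTransform` setting a server timestamp and unioning an array element; the document name and field paths are placeholders.

const transformWrite = {
  transform: {
    document: 'projects/my-project/databases/(default)/documents/chatrooms/my-chatroom',
    field_transforms: [
      // Set `last_updated` to the server's request time.
      { field_path: 'last_updated', set_to_server_value: 'REQUEST_TIME' },
      // Union 'alice' into `members` only if it is not already present.
      {
        field_path: 'members',
        append_missing_elements: { values: [{ stringValue: 'alice' }] },
      },
    ],
  },
};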

View File

@ -0,0 +1,197 @@
/*!
* Copyright 2017 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
Object.defineProperty(exports, "__esModule", { value: true });
const logger_1 = require("./logger");
/*
* @module firestore/backoff
* @private
*
* Contains backoff logic to facilitate RPC error handling. This class derives
* its implementation from the Firestore Mobile Web Client.
*
* @see https://github.com/firebase/firebase-js-sdk/blob/master/packages/firestore/src/remote/backoff.ts
*/
/*!
* The default initial backoff time in milliseconds after an error.
* Set to 1s according to https://cloud.google.com/apis/design/errors.
*/
const DEFAULT_BACKOFF_INITIAL_DELAY_MS = 1000;
/*!
* The default maximum backoff time in milliseconds.
*/
const DEFAULT_BACKOFF_MAX_DELAY_MS = 60 * 1000;
/*!
* The default factor by which to increase the backoff after each failed attempt.
*/
const DEFAULT_BACKOFF_FACTOR = 1.5;
/*!
* The default jitter to distribute the backoff attempts by (0 means no
* randomization, 1.0 means +/-50% randomization).
*/
const DEFAULT_JITTER_FACTOR = 1.0;
/*!
* The timeout handler used by `ExponentialBackoff`.
*/
let delayExecution = setTimeout;
/**
* Allows overriding of the timeout handler used by the exponential backoff
* implementation. If not invoked, we default to `setTimeout()`.
*
* Used only in testing.
*
* @private
* @param {function} handler A handler that matches the API of `setTimeout()`.
*/
function setTimeoutHandler(handler) {
delayExecution = handler;
}
exports.setTimeoutHandler = setTimeoutHandler;
/**
* A helper for running delayed tasks following an exponential backoff curve
* between attempts.
*
* Each delay is made up of a "base" delay which follows the exponential
* backoff curve, and a "jitter" (+/- 50% by default) that is calculated and
* added to the base delay. This prevents clients from accidentally
* synchronizing their delays causing spikes of load to the backend.
*
* @private
*/
class ExponentialBackoff {
/**
* @param {number=} options.initialDelayMs Optional override for the initial
* retry delay.
* @param {number=} options.backoffFactor Optional override for the
* exponential backoff factor.
* @param {number=} options.maxDelayMs Optional override for the maximum
* retry delay.
* @param {number=} options.jitterFactor Optional override to control the
* jitter factor by which to randomize attempts (0 means no randomization,
* 1.0 means +/-50% randomization). It is suggested not to exceed this range.
*/
constructor(options) {
options = options || {};
/**
* The initial delay (used as the base delay on the first retry attempt).
* Note that jitter will still be applied, so the actual delay could be as
* little as 0.5*initialDelayMs (based on a jitter factor of 1.0).
*
* @type {number}
* @private
*/
this._initialDelayMs = options.initialDelayMs !== undefined ?
options.initialDelayMs :
DEFAULT_BACKOFF_INITIAL_DELAY_MS;
/**
* The multiplier to use to determine the extended base delay after each
* attempt.
* @type {number}
* @private
*/
this._backoffFactor = options.backoffFactor !== undefined ?
options.backoffFactor :
DEFAULT_BACKOFF_FACTOR;
/**
* The maximum base delay after which no further backoff is performed.
* Note that jitter will still be applied, so the actual delay could be as
* much as 1.5*maxDelayMs (based on a jitter factor of 1.0).
*
* @type {number}
* @private
*/
this._maxDelayMs = options.maxDelayMs !== undefined ?
options.maxDelayMs :
DEFAULT_BACKOFF_MAX_DELAY_MS;
/**
* The jitter factor that controls the random distribution of the backoff
* points.
*
* @type {number}
* @private
*/
this._jitterFactor = options.jitterFactor !== undefined ?
options.jitterFactor :
DEFAULT_JITTER_FACTOR;
/**
* The backoff delay of the current attempt.
* @type {number}
* @private
*/
this._currentBaseMs = 0;
}
/**
* Resets the backoff delay.
*
* The very next backoffAndWait() will have no delay. If it is called again
* (i.e. due to an error), initialDelayMs (plus jitter) will be used, and
* subsequent ones will increase according to the backoffFactor.
*
* @private
*/
reset() {
this._currentBaseMs = 0;
}
/**
* Resets the backoff delay to the maximum delay (e.g. for use after a
* RESOURCE_EXHAUSTED error).
*
* @private
*/
resetToMax() {
this._currentBaseMs = this._maxDelayMs;
}
/**
* Returns a promise that resolves after currentDelayMs, and increases the
* delay for any subsequent attempts.
*
* @private
* @return {Promise.<void>} A Promise that resolves when the current delay
* elapsed.
*/
backoffAndWait() {
// First schedule using the current base (which may be 0 and should be
// honored as such).
const delayWithJitterMs = this._currentBaseMs + this._jitterDelayMs();
if (this._currentBaseMs > 0) {
logger_1.logger('ExponentialBackoff.backoffAndWait', `Backing off for ${delayWithJitterMs} ms ` +
`(base delay: ${this._currentBaseMs} ms)`);
}
// Apply backoff factor to determine next delay and ensure it is within
// bounds.
this._currentBaseMs *= this._backoffFactor;
if (this._currentBaseMs < this._initialDelayMs) {
this._currentBaseMs = this._initialDelayMs;
}
if (this._currentBaseMs > this._maxDelayMs) {
this._currentBaseMs = this._maxDelayMs;
}
return new Promise(resolve => {
delayExecution(resolve, delayWithJitterMs);
});
}
/**
* Returns a randomized "jitter" delay based on the current base and jitter
* factor.
*
* @private
* @returns {number} The jitter to apply based on the current delay.
*/
_jitterDelayMs() {
return (Math.random() - 0.5) * this._jitterFactor * this._currentBaseMs;
}
}
exports.ExponentialBackoff = ExponentialBackoff;
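A possible usage sketch for the class above (not part of the shipped module): retrying a flaky asynchronous operation with exponential backoff, where `attemptRpc` is a stand-in for a real RPC call.

// Example only: retry an async operation using ExponentialBackoff.
async function callWithBackoff(attemptRpc, maxAttempts = 5) {
  const backoff = new ExponentialBackoff({ initialDelayMs: 500, maxDelayMs: 10000 });
  for (let attempt = 1; attempt <= maxAttempts; ++attempt) {
    try {
      return await attemptRpc();
    } catch (err) {
      if (attempt === maxAttempts) throw err;
      // Wait for the current delay (plus jitter), then grow the base delay.
      await backoff.backoffAndWait();
    }
  }
}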

View File

@ -0,0 +1,213 @@
/*!
* Copyright 2017 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
const is_1 = __importDefault(require("is"));
const validate_1 = require("./validate");
const validate = validate_1.createValidator();
/*!
* @module firestore/convert
* @private
*
* This module contains utility functions to convert
* `firestore.v1beta1.Documents` from Proto3 JSON to their equivalent
* representation in Protobuf JS. Protobuf JS is the only encoding supported by
* this client, and dependencies that use Proto3 JSON (such as the Google Cloud
* Functions SDK) are supported through this conversion and its usage in
* {@see Firestore#snapshot_}.
*/
/**
* Converts an ISO 8601 or google.protobuf.Timestamp proto into Protobuf JS.
*
* @private
* @param {*=} timestampValue - The value to convert.
* @param {string=} argumentName - The argument name to use in the error message
* if the conversion fails. If omitted, 'timestampValue' is used.
* @return {{nanos,seconds}|undefined} The value as expected by Protobuf JS or
* undefined if no input was provided.
*/
function convertTimestamp(timestampValue, argumentName) {
let timestampProto = undefined;
if (is_1.default.string(timestampValue)) {
let date = new Date(timestampValue);
let seconds = Math.floor(date.getTime() / 1000);
let nanos = 0;
if (timestampValue.length > 20) {
const nanoString = timestampValue.substring(20, timestampValue.length - 1);
const trailingZeroes = 9 - nanoString.length;
nanos = parseInt(nanoString, 10) * Math.pow(10, trailingZeroes);
}
if (isNaN(seconds) || isNaN(nanos)) {
argumentName = argumentName || 'timestampValue';
throw new Error(`Specify a valid ISO 8601 timestamp for "${argumentName}".`);
}
timestampProto = {
seconds: seconds || undefined,
nanos: nanos || undefined,
};
}
else if (is_1.default.defined(timestampValue)) {
validate.isObject('timestampValue', timestampValue);
timestampProto = {
seconds: timestampValue.seconds || undefined,
nanos: timestampValue.nanos || undefined,
};
}
return timestampProto;
}
/**
* Converts a Proto3 JSON 'bytesValue' field into Protobuf JS.
*
* @private
* @param {*} bytesValue - The value to convert.
* @return {Buffer} The value as expected by Protobuf JS.
*/
function convertBytes(bytesValue) {
if (typeof bytesValue === 'string') {
return Buffer.from(bytesValue, 'base64');
}
else {
return bytesValue;
}
}
/**
* Detects 'valueType' from a Proto3 JSON `firestore.v1beta1.Value` proto.
*
* @private
* @param {object} proto - The `firestore.v1beta1.Value` proto.
* @return {string} - The string value for 'valueType'.
*/
function detectValueType(proto) {
if (proto.valueType) {
return proto.valueType;
}
let detectedValues = [];
if (is_1.default.defined(proto.stringValue)) {
detectedValues.push('stringValue');
}
if (is_1.default.defined(proto.booleanValue)) {
detectedValues.push('booleanValue');
}
if (is_1.default.defined(proto.integerValue)) {
detectedValues.push('integerValue');
}
if (is_1.default.defined(proto.doubleValue)) {
detectedValues.push('doubleValue');
}
if (is_1.default.defined(proto.timestampValue)) {
detectedValues.push('timestampValue');
}
if (is_1.default.defined(proto.referenceValue)) {
detectedValues.push('referenceValue');
}
if (is_1.default.defined(proto.arrayValue)) {
detectedValues.push('arrayValue');
}
if (is_1.default.defined(proto.nullValue)) {
detectedValues.push('nullValue');
}
if (is_1.default.defined(proto.mapValue)) {
detectedValues.push('mapValue');
}
if (is_1.default.defined(proto.geoPointValue)) {
detectedValues.push('geoPointValue');
}
if (is_1.default.defined(proto.bytesValue)) {
detectedValues.push('bytesValue');
}
if (detectedValues.length !== 1) {
throw new Error(`Unable to infer type value from '${JSON.stringify(proto)}'.`);
}
return detectedValues[0];
}
/**
* Converts a `firestore.v1beta1.Value` in Proto3 JSON encoding into the
* Protobuf JS format expected by this client.
*
* @private
* @param {object} fieldValue - The `firestore.v1beta1.Value` in Proto3 JSON
* format.
* @return {object} The `firestore.v1beta1.Value` in Protobuf JS format.
*/
function convertValue(fieldValue) {
let valueType = detectValueType(fieldValue);
switch (valueType) {
case 'timestampValue':
return {
timestampValue: convertTimestamp(fieldValue.timestampValue),
};
case 'bytesValue':
return {
bytesValue: convertBytes(fieldValue.bytesValue),
};
case 'arrayValue': {
let arrayValue = [];
if (is_1.default.array(fieldValue.arrayValue.values)) {
for (let value of fieldValue.arrayValue.values) {
arrayValue.push(convertValue(value));
}
}
return {
arrayValue: {
values: arrayValue,
},
};
}
case 'mapValue': {
let mapValue = {};
for (let prop in fieldValue.mapValue.fields) {
if (fieldValue.mapValue.fields.hasOwnProperty(prop)) {
mapValue[prop] = convertValue(fieldValue.mapValue.fields[prop]);
}
}
return {
mapValue: {
fields: mapValue,
},
};
}
default:
return fieldValue;
}
}
/**
* Converts a `firestore.v1beta1.Document` in Proto3 JSON encoding into the
* Protobuf JS format expected by this client. This conversion creates a copy of
* the underlying document.
*
* @private
* @param {object} document - The `firestore.v1beta1.Document` in Proto3 JSON
* format.
* @return {object} The `firestore.v1beta1.Document` in Protobuf JS format.
*/
function convertDocument(document) {
let result = {};
for (let prop in document) {
if (document.hasOwnProperty(prop)) {
result[prop] = convertValue(document[prop]);
}
}
return result;
}
module.exports = {
documentFromJson: convertDocument,
timestampFromJson: convertTimestamp,
valueFromJson: convertValue,
detectValueType: detectValueType,
};
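A small usage sketch for the converters above (example data only): converting a Proto3 JSON map of `firestore.v1beta1.Value`s into the Protobuf JS representation used by this client.

// Example only: fields of a document in Proto3 JSON encoding.
const exampleFields = {
  title: { stringValue: 'hello' },
  created: { timestampValue: '2018-01-01T12:00:00.000000000Z' },
  attachment: { bytesValue: 'aGVsbG8=' },  // base64-encoded in Proto3 JSON
};
// convertDocument copies the fields, turning the timestamp into
// {seconds, nanos} and the base64 bytes into a Buffer.
const convertedFields = convertDocument(exampleFields);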

View File

@ -0,0 +1,173 @@
/*!
* Copyright 2018 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
var __importStar = (this && this.__importStar) || function (mod) {
if (mod && mod.__esModule) return mod;
var result = {};
if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k];
result["default"] = mod;
return result;
};
Object.defineProperty(exports, "__esModule", { value: true });
const is = __importStar(require("is"));
/**
* A DocumentChange represents a change to the documents matching a query.
* It contains the document affected and the type of change that occurred.
*
* @class
*/
class DocumentChange {
/**
* @private
* @hideconstructor
*
* @param {string} type - 'added' | 'removed' | 'modified'.
* @param {QueryDocumentSnapshot} document - The document.
* @param {number} oldIndex - The index in the documents array prior to this
* change.
* @param {number} newIndex - The index in the documents array after this
* change.
*/
constructor(type, document, oldIndex, newIndex) {
this._type = type;
this._document = document;
this._oldIndex = oldIndex;
this._newIndex = newIndex;
}
/**
* The type of change ('added', 'modified', or 'removed').
*
* @type {string}
* @name DocumentChange#type
* @readonly
*
* @example
* let query = firestore.collection('col').where('foo', '==', 'bar');
* let docsArray = [];
*
* let unsubscribe = query.onSnapshot(querySnapshot => {
* for (let change of querySnapshot.docChanges) {
* console.log(`Type of change is ${change.type}`);
* }
* });
*
* // Remove this listener.
* unsubscribe();
*/
get type() {
return this._type;
}
/**
* The document affected by this change.
*
* @type {QueryDocumentSnapshot}
* @name DocumentChange#doc
* @readonly
*
* @example
* let query = firestore.collection('col').where('foo', '==', 'bar');
*
* let unsubscribe = query.onSnapshot(querySnapshot => {
* for (let change of querySnapshot.docChanges) {
* console.log(change.doc.data());
* }
* });
*
* // Remove this listener.
* unsubscribe();
*/
get doc() {
return this._document;
}
/**
* The index of the changed document in the result set immediately prior to
* this DocumentChange (i.e. supposing that all prior DocumentChange objects
* have been applied). Is -1 for 'added' events.
*
* @type {number}
* @name DocumentChange#oldIndex
* @readonly
*
* @example
* let query = firestore.collection('col').where('foo', '==', 'bar');
* let docsArray = [];
*
* let unsubscribe = query.onSnapshot(querySnapshot => {
* for (let change of querySnapshot.docChanges) {
* if (change.oldIndex !== -1) {
* docsArray.splice(change.oldIndex, 1);
* }
* if (change.newIndex !== -1) {
* docsArray.splice(change.newIndex, 0, change.doc);
* }
* }
* });
*
* // Remove this listener.
* unsubscribe();
*/
get oldIndex() {
return this._oldIndex;
}
/**
* The index of the changed document in the result set immediately after
* this DocumentChange (i.e. supposing that all prior DocumentChange
* objects and the current DocumentChange object have been applied).
* Is -1 for 'removed' events.
*
* @type {number}
* @name DocumentChange#newIndex
* @readonly
*
* @example
* let query = firestore.collection('col').where('foo', '==', 'bar');
* let docsArray = [];
*
* let unsubscribe = query.onSnapshot(querySnapshot => {
* for (let change of querySnapshot.docChanges) {
* if (change.oldIndex !== -1) {
* docsArray.splice(change.oldIndex, 1);
* }
* if (change.newIndex !== -1) {
* docsArray.splice(change.newIndex, 0, change.doc);
* }
* }
* });
*
* // Remove this listener.
* unsubscribe();
*/
get newIndex() {
return this._newIndex;
}
/**
* Returns true if the data in this `DocumentChange` is equal to the provided
* value.
*
* @param {*} other The value to compare against.
* @return true if this `DocumentChange` is equal to the provided value.
*/
isEqual(other) {
if (this === other) {
return true;
}
return (is.instanceof(other, DocumentChange) && this._type === other._type &&
this._oldIndex === other._oldIndex &&
this._newIndex === other._newIndex &&
this._document.isEqual(other._document));
}
}
exports.DocumentChange = DocumentChange;

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,324 @@
/*!
* Copyright 2018 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
const deep_equal_1 = __importDefault(require("deep-equal"));
const validate_1 = require("./validate");
const firestore_proto_api_1 = require("../protos/firestore_proto_api");
var api = firestore_proto_api_1.google.firestore.v1beta1;
const validate = validate_1.createValidator();
/**
* Sentinel values that can be used when writing documents with set(), create()
* or update().
*
* @class
*/
class FieldValue {
/**
* @private
* @hideconstructor
*/
constructor() { }
/**
* Returns a sentinel for use with update() or set() with {merge:true} to mark
* a field for deletion.
*
* @returns {FieldValue} The sentinel value to use in your objects.
*
* @example
* let documentRef = firestore.doc('col/doc');
* let data = { a: 'b', c: 'd' };
*
* documentRef.set(data).then(() => {
* return documentRef.update({a: Firestore.FieldValue.delete()});
* }).then(() => {
* // Document now only contains { c: 'd' }
* });
*/
static delete() {
return DeleteTransform.DELETE_SENTINEL;
}
/**
* Returns a sentinel used with set(), create() or update() to include a
* server-generated timestamp in the written data.
*
* @return {FieldValue} The FieldValue sentinel for use in a call to set(),
* create() or update().
*
* @example
* let documentRef = firestore.doc('col/doc');
*
* documentRef.set({
* time: Firestore.FieldValue.serverTimestamp()
* }).then(() => {
* return documentRef.get();
* }).then(doc => {
* console.log(`Server time set to ${doc.get('time')}`);
* });
*/
static serverTimestamp() {
return ServerTimestampTransform.SERVER_TIMESTAMP_SENTINEL;
}
/**
* Returns a special value that can be used with set(), create() or update()
* that tells the server to union the given elements with any array value that
* already exists on the server. Each specified element that doesn't already
* exist in the array will be added to the end. If the field being modified is
* not already an array it will be overwritten with an array containing
* exactly the specified elements.
*
* @param {...*} elements The elements to union into the array.
* @return {FieldValue} The FieldValue sentinel for use in a call to set(),
* create() or update().
*
* @example
* let documentRef = firestore.doc('col/doc');
*
* documentRef.update(
* 'array', Firestore.FieldValue.arrayUnion('foo')
* ).then(() => {
* return documentRef.get();
* }).then(doc => {
* // doc.get('array') contains field 'foo'
* });
*/
static arrayUnion(...elements) {
validate.minNumberOfArguments('FieldValue.arrayUnion', arguments, 1);
return new ArrayUnionTransform(elements);
}
/**
* Returns a special value that can be used with set(), create() or update()
* that tells the server to remove the given elements from any array value
* that already exists on the server. All instances of each element specified
* will be removed from the array. If the field being modified is not already
* an array it will be overwritten with an empty array.
*
* @param {...*} elements The elements to remove from the array.
* @return {FieldValue} The FieldValue sentinel for use in a call to set(),
* create() or update().
*
* @example
* let documentRef = firestore.doc('col/doc');
*
* documentRef.update(
* 'array', Firestore.FieldValue.arrayRemove('foo')
* ).then(() => {
* return documentRef.get();
* }).then(doc => {
* // doc.get('array') no longer contains field 'foo'
* });
*/
static arrayRemove(...elements) {
validate.minNumberOfArguments('FieldValue.arrayRemove', arguments, 1);
return new ArrayRemoveTransform(elements);
}
/**
* Returns true if this `FieldValue` is equal to the provided value.
*
* @param {*} other The value to compare against.
* @return {boolean} true if this `FieldValue` is equal to the provided value.
*/
isEqual(other) {
return this === other;
}
}
exports.FieldValue = FieldValue;
/**
* An internal interface shared by all field transforms.
*
* A `FieldTransform` subclass should implement `includeInDocumentMask`,
* `includeInDocumentTransform` and `toProto` (if `includeInDocumentTransform`
* is `true`).
*
* @private
* @abstract
*/
class FieldTransform extends FieldValue {
}
exports.FieldTransform = FieldTransform;
/**
* A transform that deletes a field from a Firestore document.
*
* @private
*/
class DeleteTransform extends FieldTransform {
constructor() {
super();
}
/**
* Deletes are included in document masks.
*/
get includeInDocumentMask() {
return true;
}
/**
* Deletes are omitted from document transforms.
*/
get includeInDocumentTransform() {
return false;
}
get methodName() {
return 'FieldValue.delete';
}
validate() {
return true;
}
toProto(serializer, fieldPath) {
throw new Error('FieldValue.delete() should not be included in a FieldTransform');
}
}
/**
* Sentinel value for a field delete.
*/
DeleteTransform.DELETE_SENTINEL = new DeleteTransform();
exports.DeleteTransform = DeleteTransform;
/**
* A transform that sets a field to the Firestore server time.
*
* @private
*/
class ServerTimestampTransform extends FieldTransform {
constructor() {
super();
}
/**
* Server timestamps are omitted from document masks.
*
* @private
*/
get includeInDocumentMask() {
return false;
}
/**
* Server timestamps are included in document transforms.
*
* @private
*/
get includeInDocumentTransform() {
return true;
}
get methodName() {
return 'FieldValue.serverTimestamp';
}
validate() {
return true;
}
toProto(serializer, fieldPath) {
return {
fieldPath: fieldPath.formattedName,
setToServerValue: api.DocumentTransform.FieldTransform.ServerValue.REQUEST_TIME,
};
}
}
/**
* Sentinel value for a server timestamp.
*
* @private
*/
ServerTimestampTransform.SERVER_TIMESTAMP_SENTINEL = new ServerTimestampTransform();
/**
* Transforms an array value via a union operation.
*
* @private
*/
class ArrayUnionTransform extends FieldTransform {
constructor(elements) {
super();
this.elements = elements;
}
/**
* Array transforms are omitted from document masks.
*/
get includeInDocumentMask() {
return false;
}
/**
* Array transforms are included in document transforms.
*/
get includeInDocumentTransform() {
return true;
}
get methodName() {
return 'FieldValue.arrayUnion';
}
validate(validator) {
let valid = true;
for (let i = 0; valid && i < this.elements.length; ++i) {
valid = validator.isArrayElement(i, this.elements[i], { allowDeletes: 'none', allowTransforms: false });
}
return valid;
}
toProto(serializer, fieldPath) {
const encodedElements = serializer.encodeValue(this.elements).arrayValue;
return {
fieldPath: fieldPath.formattedName,
appendMissingElements: encodedElements
};
}
isEqual(other) {
return (this === other ||
(other instanceof ArrayUnionTransform &&
deep_equal_1.default(this.elements, other.elements, { strict: true })));
}
}
/**
* Transforms an array value via a remove operation.
*
* @private
*/
class ArrayRemoveTransform extends FieldTransform {
constructor(elements) {
super();
this.elements = elements;
}
/**
* Array transforms are omitted from document masks.
*/
get includeInDocumentMask() {
return false;
}
/**
* Array transforms are included in document transforms.
*/
get includeInDocumentTransform() {
return true;
}
get methodName() {
return 'FieldValue.arrayRemove';
}
validate(validator) {
let valid = true;
for (let i = 0; valid && i < this.elements.length; ++i) {
valid = validator.isArrayElement(i, this.elements[i], { allowDeletes: 'none', allowTransforms: false });
}
return valid;
}
toProto(serializer, fieldPath) {
const encodedElements = serializer.encodeValue(this.elements).arrayValue;
return {
fieldPath: fieldPath.formattedName,
removeAllFromArray: encodedElements
};
}
isEqual(other) {
return (this === other ||
(other instanceof ArrayRemoveTransform &&
deep_equal_1.default(this.elements, other.elements, { strict: true })));
}
}
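/*
 * Usage sketch for the transform contract above, assuming this compiled module
 * is required as './field-value' (the path is an assumption). Only the exported
 * DeleteTransform is exercised here; the server-timestamp and array transforms
 * follow the same includeInDocumentMask / includeInDocumentTransform / toProto()
 * protocol.
 */
const { DeleteTransform, FieldTransform } = require('./field-value');

const del = DeleteTransform.DELETE_SENTINEL;
console.log(del instanceof FieldTransform);   // true
console.log(del.includeInDocumentMask);       // true: the field path goes into the update mask
console.log(del.includeInDocumentTransform);  // false: deletes never emit a DocumentTransform entry
console.log(del.methodName);                  // 'FieldValue.delete'
// Calling del.toProto(serializer, fieldPath) would throw, since deletes are
// encoded purely through the document mask.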


@ -0,0 +1,108 @@
/*!
* Copyright 2018 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
Object.defineProperty(exports, "__esModule", { value: true });
const validate_1 = require("./validate");
const validate = validate_1.createValidator();
/**
* An immutable object representing a geographic location in Firestore. The
* location is represented as a latitude/longitude pair.
*
* @class
*/
class GeoPoint {
/**
* Creates a [GeoPoint]{@link GeoPoint}.
*
* @param {number} latitude The latitude as a number between -90 and 90.
* @param {number} longitude The longitude as a number between -180 and 180.
*
* @example
* let data = {
* google: new Firestore.GeoPoint(37.422, 122.084)
* };
*
* firestore.doc('col/doc').set(data).then(() => {
* console.log(`Location is ${data.google.latitude}, ` +
* `${data.google.longitude}`);
* });
*/
constructor(latitude, longitude) {
validate.isNumber('latitude', latitude, -90, 90);
validate.isNumber('longitude', longitude, -180, 180);
this._latitude = latitude;
this._longitude = longitude;
}
/**
* The latitude as a number between -90 and 90.
*
* @type {number}
* @name GeoPoint#latitude
* @readonly
*/
get latitude() {
return this._latitude;
}
/**
* The longitude as a number between -180 and 180.
*
* @type {number}
* @name GeoPoint#longitude
* @readonly
*/
get longitude() {
return this._longitude;
}
/**
* Returns a string representation for this GeoPoint.
*
* @return {string} The string representation.
*/
toString() {
return `GeoPoint { latitude: ${this.latitude}, longitude: ${this.longitude} }`;
}
/**
* Returns true if this `GeoPoint` is equal to the provided value.
*
* @param {*} other The value to compare against.
* @return {boolean} true if this `GeoPoint` is equal to the provided value.
*/
isEqual(other) {
return (this === other ||
(other instanceof GeoPoint && this.latitude === other.latitude &&
this.longitude === other.longitude));
}
/**
* Converts the GeoPoint to a google.type.LatLng proto.
* @private
*/
toProto() {
return {
geoPointValue: {
latitude: this.latitude,
longitude: this.longitude,
}
};
}
/**
* Converts a google.type.LatLng proto to its GeoPoint representation.
* @private
*/
static fromProto(proto) {
return new GeoPoint(proto.latitude || 0, proto.longitude || 0);
}
}
exports.GeoPoint = GeoPoint;
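/*
 * Usage sketch, assuming this compiled module is required as './geo-point' (the
 * path is an assumption): construction validates the coordinate ranges, and
 * toProto()/fromProto() round-trip the google.type.LatLng representation.
 */
const { GeoPoint } = require('./geo-point');

const point = new GeoPoint(37.422, -122.084);
console.log(point.toString());     // GeoPoint { latitude: 37.422, longitude: -122.084 }
const proto = point.toProto();     // { geoPointValue: { latitude: 37.422, longitude: -122.084 } }
const copy = GeoPoint.fromProto(proto.geoPointValue);
console.log(point.isEqual(copy));  // true
// new GeoPoint(91, 0) is rejected by the validator, since latitude must lie in [-90, 90].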

File diff suppressed because it is too large


@ -0,0 +1,60 @@
/*!
* Copyright 2018 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
const util_1 = __importDefault(require("util"));
const validate_1 = require("./validate");
const validate = validate_1.createValidator();
/*! The Firestore library version */
let libVersion;
/*! The external function used to emit logs. */
let logFunction = (msg) => { };
/**
* Log function to use for debug output. By default, we don't perform any
* logging.
*
* @private
*/
function logger(methodName, requestTag, logMessage, ...additionalArgs) {
requestTag = requestTag || '#####';
const formattedMessage = util_1.default.format(logMessage, ...additionalArgs);
const time = new Date().toISOString();
logFunction(`Firestore (${libVersion}) ${time} ${requestTag} [${methodName}]: ` +
formattedMessage);
}
exports.logger = logger;
/**
* Sets the log function for all active Firestore instances.
*
* @private
*/
function setLogFunction(logger) {
validate.isFunction('logger', logger);
logFunction = logger;
}
exports.setLogFunction = setLogFunction;
/**
 * Sets the library version to be used in log messages.
*
* @private
*/
function setLibVersion(version) {
libVersion = version;
}
exports.setLibVersion = setLibVersion;
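/*
 * Usage sketch, assuming this compiled module is required as './logger' (the
 * path is an assumption). Nothing is emitted until a log function is installed
 * via setLogFunction(); the version string below is illustrative.
 */
const { logger, setLogFunction, setLibVersion } = require('./logger');

setLibVersion('0.0.0');       // illustrative version string
setLogFunction(console.log);  // route Firestore debug output to stdout
logger('ClientPool.acquire', 'abc12', 'Re-using existing client with %s remaining operations', 3);
// -> Firestore (0.0.0) <ISO timestamp> abc12 [ClientPool.acquire]: Re-using existing client with 3 remaining operations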


@ -0,0 +1,238 @@
/*!
* Copyright 2017 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
var __importStar = (this && this.__importStar) || function (mod) {
if (mod && mod.__esModule) return mod;
var result = {};
if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k];
result["default"] = mod;
return result;
};
Object.defineProperty(exports, "__esModule", { value: true });
const is = __importStar(require("is"));
const convert_1 = require("./convert");
const path_1 = require("./path");
const validate_1 = require("./validate");
/*!
* The type order as defined by the backend.
*/
var TypeOrder;
(function (TypeOrder) {
TypeOrder[TypeOrder["NULL"] = 0] = "NULL";
TypeOrder[TypeOrder["BOOLEAN"] = 1] = "BOOLEAN";
TypeOrder[TypeOrder["NUMBER"] = 2] = "NUMBER";
TypeOrder[TypeOrder["TIMESTAMP"] = 3] = "TIMESTAMP";
TypeOrder[TypeOrder["STRING"] = 4] = "STRING";
TypeOrder[TypeOrder["BLOB"] = 5] = "BLOB";
TypeOrder[TypeOrder["REF"] = 6] = "REF";
TypeOrder[TypeOrder["GEO_POINT"] = 7] = "GEO_POINT";
TypeOrder[TypeOrder["ARRAY"] = 8] = "ARRAY";
TypeOrder[TypeOrder["OBJECT"] = 9] = "OBJECT";
})(TypeOrder || (TypeOrder = {}));
/*!
* @private
*/
function typeOrder(val) {
const valueType = convert_1.detectValueType(val);
switch (valueType) {
case 'nullValue':
return TypeOrder.NULL;
case 'integerValue':
return TypeOrder.NUMBER;
case 'doubleValue':
return TypeOrder.NUMBER;
case 'stringValue':
return TypeOrder.STRING;
case 'booleanValue':
return TypeOrder.BOOLEAN;
case 'arrayValue':
return TypeOrder.ARRAY;
case 'timestampValue':
return TypeOrder.TIMESTAMP;
case 'geoPointValue':
return TypeOrder.GEO_POINT;
case 'bytesValue':
return TypeOrder.BLOB;
case 'referenceValue':
return TypeOrder.REF;
case 'mapValue':
return TypeOrder.OBJECT;
default:
throw validate_1.customObjectError(val);
}
}
/*!
* @private
*/
function primitiveComparator(left, right) {
if (left < right) {
return -1;
}
if (left > right) {
return 1;
}
return 0;
}
exports.primitiveComparator = primitiveComparator;
/*!
* Utility function to compare doubles (using Firestore semantics for NaN).
* @private
*/
function compareNumbers(left, right) {
if (left < right) {
return -1;
}
if (left > right) {
return 1;
}
if (left === right) {
return 0;
}
// one or both are NaN.
if (isNaN(left)) {
return isNaN(right) ? 0 : -1;
}
return 1;
}
/*!
* @private
*/
function compareNumberProtos(left, right) {
let leftValue, rightValue;
if (left.integerValue !== undefined) {
leftValue = Number(left.integerValue);
}
else {
leftValue = Number(left.doubleValue);
}
if (right.integerValue !== undefined) {
rightValue = Number(right.integerValue);
}
else {
rightValue = Number(right.doubleValue);
}
return compareNumbers(leftValue, rightValue);
}
/*!
* @private
*/
function compareTimestamps(left, right) {
const seconds = primitiveComparator(left.seconds || 0, right.seconds || 0);
if (seconds !== 0) {
return seconds;
}
return primitiveComparator(left.nanos || 0, right.nanos || 0);
}
/*!
* @private
*/
function compareBlobs(left, right) {
if (!is.instanceof(left, Buffer) || !is.instanceof(right, Buffer)) {
throw new Error('Blobs can only be compared if they are Buffers.');
}
return Buffer.compare(left, right);
}
/*!
* @private
*/
function compareReferenceProtos(left, right) {
const leftPath = path_1.ResourcePath.fromSlashSeparatedString(left.referenceValue);
const rightPath = path_1.ResourcePath.fromSlashSeparatedString(right.referenceValue);
return leftPath.compareTo(rightPath);
}
/*!
* @private
*/
function compareGeoPoints(left, right) {
return (primitiveComparator(left.latitude || 0, right.latitude || 0) ||
primitiveComparator(left.longitude || 0, right.longitude || 0));
}
/*!
* @private
*/
function compareArrays(left, right) {
for (let i = 0; i < left.length && i < right.length; i++) {
const valueComparison = compare(left[i], right[i]);
if (valueComparison !== 0) {
return valueComparison;
}
}
// If all the values matched so far, just check the length.
return primitiveComparator(left.length, right.length);
}
/*!
* @private
*/
function compareObjects(left, right) {
// This requires iterating over the keys in the object in order and doing a
// deep comparison.
const leftKeys = Object.keys(left);
const rightKeys = Object.keys(right);
leftKeys.sort();
rightKeys.sort();
for (let i = 0; i < leftKeys.length && i < rightKeys.length; i++) {
const keyComparison = primitiveComparator(leftKeys[i], rightKeys[i]);
if (keyComparison !== 0) {
return keyComparison;
}
const key = leftKeys[i];
const valueComparison = compare(left[key], right[key]);
if (valueComparison !== 0) {
return valueComparison;
}
}
// If all the keys matched so far, just check the length.
return primitiveComparator(leftKeys.length, rightKeys.length);
}
/*!
* @private
*/
function compare(left, right) {
// First compare the types.
const leftType = typeOrder(left);
const rightType = typeOrder(right);
const typeComparison = primitiveComparator(leftType, rightType);
if (typeComparison !== 0) {
return typeComparison;
}
// So they are the same type.
switch (leftType) {
case TypeOrder.NULL:
// Nulls are all equal.
return 0;
case TypeOrder.BOOLEAN:
return primitiveComparator(left.booleanValue, right.booleanValue);
case TypeOrder.STRING:
return primitiveComparator(left.stringValue, right.stringValue);
case TypeOrder.NUMBER:
return compareNumberProtos(left, right);
case TypeOrder.TIMESTAMP:
return compareTimestamps(left.timestampValue, right.timestampValue);
case TypeOrder.BLOB:
return compareBlobs(left.bytesValue, right.bytesValue);
case TypeOrder.REF:
return compareReferenceProtos(left, right);
case TypeOrder.GEO_POINT:
return compareGeoPoints(left.geoPointValue, right.geoPointValue);
case TypeOrder.ARRAY:
return compareArrays(left.arrayValue.values || [], right.arrayValue.values || []);
case TypeOrder.OBJECT:
return compareObjects(left.mapValue.fields || {}, right.mapValue.fields || {});
default:
throw new Error(`Encountered unknown type order: ${leftType}`);
}
}
exports.compare = compare;
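/*
 * Usage sketch, assuming this compiled module is required as './order' (the path
 * is an assumption). compare() operates on Firestore 'Value' protos: types are
 * ordered first (see TypeOrder), then values within a type, with integers and
 * doubles sharing the NUMBER slot and NaN sorting before all other numbers.
 */
const { compare, primitiveComparator } = require('./order');

console.log(primitiveComparator('a', 'b'));                          // -1
console.log(compare({ booleanValue: true }, { stringValue: 'a' }));  // -1: BOOLEAN sorts before STRING
console.log(compare({ integerValue: '2' }, { doubleValue: 1.5 }));   //  1: both are NUMBER and 2 > 1.5
console.log(compare({ doubleValue: NaN }, { doubleValue: 0 }));      // -1: NaN sorts before 0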


@ -0,0 +1,510 @@
/*!
* Copyright 2017 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
const is_1 = __importDefault(require("is"));
const validate_1 = require("./validate");
const validate = validate_1.createValidator();
/*!
* A regular expression to verify an absolute Resource Path in Firestore. It
* extracts the project ID, the database name and the relative resource path
* if available.
*
* @type {RegExp}
*/
const RESOURCE_PATH_RE =
// Note: [\s\S] matches all characters including newlines.
/^projects\/([^/]*)\/databases\/([^/]*)(?:\/documents\/)?([\s\S]*)$/;
/*!
* A regular expression to verify whether a field name can be passed to the
* backend without escaping.
*
* @type {RegExp}
*/
const UNESCAPED_FIELD_NAME_RE = /^[_a-zA-Z][_a-zA-Z0-9]*$/;
/*!
* A regular expression to verify field paths that are passed to the API as
* strings. Field paths that do not match this expression have to be provided
* as a [FieldPath]{@link FieldPath} object.
*
* @type {RegExp}
*/
const FIELD_PATH_RE = /^[^*~/[\]]+$/;
/**
* An abstract class representing a Firestore path.
*
* Subclasses have to implement `split()` and `canonicalString()`.
*
* @private
* @class
*/
class Path {
/**
* Creates a new Path with the given segments.
*
* @private
* @hideconstructor
* @param {string[]} segments - Sequence of parts of a path.
*/
constructor(segments) {
this.segments = segments;
}
/**
* String representation as expected by the proto API.
*
* @private
* @type {string}
*/
get formattedName() {
return this.canonicalString();
}
/**
* Create a child path beneath the current level.
*
* @private
* @param {string|T} relativePath - Relative path to append to the current
* path.
* @returns {T} The new path.
* @template T
*/
append(relativePath) {
if (relativePath instanceof Path) {
return this.construct(this.segments.concat(relativePath.segments));
}
return this.construct(this.segments.concat(this.split(relativePath)));
}
/**
* Returns the path of the parent node.
*
* @private
     * @returns {T|null} The new path or null if we are already at the root.
* @template T
*/
parent() {
if (this.segments.length === 0) {
return null;
}
return this.construct(this.segments.slice(0, this.segments.length - 1));
}
/**
* Checks whether the current path is a prefix of the specified path.
*
* @private
* @param {Path} other - The path to check against.
* @returns {boolean} 'true' iff the current path is a prefix match with
* 'other'.
*/
isPrefixOf(other) {
if (other.segments.length < this.segments.length) {
return false;
}
for (let i = 0; i < this.segments.length; i++) {
if (this.segments[i] !== other.segments[i]) {
return false;
}
}
return true;
}
/**
* Returns a string representation of this path.
*
* @private
* @returns {string} A string representing this path.
*/
toString() {
return this.formattedName;
}
/**
* Compare the current path against another Path object.
*
* @private
* @param {Path} other - The path to compare to.
* @returns {number} -1 if current < other, 1 if current > other, 0 if equal
*/
compareTo(other) {
const len = Math.min(this.segments.length, other.segments.length);
for (let i = 0; i < len; i++) {
if (this.segments[i] < other.segments[i]) {
return -1;
}
if (this.segments[i] > other.segments[i]) {
return 1;
}
}
if (this.segments.length < other.segments.length) {
return -1;
}
if (this.segments.length > other.segments.length) {
return 1;
}
return 0;
}
/**
* Returns a copy of the underlying segments.
*
* @private
* @returns {Array.<string>} A copy of the segments that make up this path.
*/
toArray() {
return this.segments.slice();
}
/**
* Returns true if this `Path` is equal to the provided value.
*
* @private
* @param {*} other The value to compare against.
* @return {boolean} true if this `Path` is equal to the provided value.
*/
isEqual(other) {
return (this === other ||
(is_1.default.instanceof(other, this.constructor) &&
this.compareTo(other) === 0));
}
}
/**
* A slash-separated path for navigating resources (documents and collections)
* within Firestore.
*
* @private
* @class
*/
class ResourcePath extends Path {
/**
* Constructs a Firestore Resource Path.
*
* @private
* @hideconstructor
*
* @param {string} projectId - The Firestore project id.
* @param {string} databaseId - The Firestore database id.
* @param {string[]?} segments - Sequence of names of the parts of
* the path.
*/
constructor(projectId, databaseId, segments) {
const elements = segments instanceof Array ?
segments :
Array.prototype.slice.call(arguments, 2);
super(elements);
this.projectId = projectId;
this.databaseId = databaseId;
}
/**
* String representation of the path relative to the database root.
*
* @private
* @type {string}
*/
get relativeName() {
return this.segments.join('/');
}
/**
* Indicates whether this ResourcePath points to a document.
*
* @private
* @type {boolean}
*/
get isDocument() {
return this.segments.length > 0 && this.segments.length % 2 === 0;
}
/**
* Indicates whether this ResourcePath points to a collection.
*
* @private
* @type {boolean}
*/
get isCollection() {
return this.segments.length % 2 === 1;
}
/**
* The last component of the path.
*
* @private
* @type {string|null}
*/
get id() {
if (this.segments.length > 0) {
return this.segments[this.segments.length - 1];
}
return null;
}
/**
* Returns true if the given string can be used as a relative or absolute
* resource path.
*
* @private
* @param {string} resourcePath - The path to validate.
* @throws if the string can't be used as a resource path.
* @returns {boolean} 'true' when the path is valid.
*/
static validateResourcePath(resourcePath) {
if (!is_1.default.string(resourcePath) || resourcePath === '') {
throw new Error(`Path must be a non-empty string.`);
}
if (resourcePath.indexOf('//') >= 0) {
throw new Error('Paths must not contain //.');
}
return true;
}
/**
* Creates a resource path from an absolute Firestore path.
*
* @private
* @param {string} absolutePath - A string representation of a Resource Path.
* @returns {ResourcePath} The new ResourcePath.
*/
static fromSlashSeparatedString(absolutePath) {
const elements = RESOURCE_PATH_RE.exec(absolutePath);
if (elements) {
const project = elements[1];
const database = elements[2];
const path = elements[3];
return new ResourcePath(project, database).append(path);
}
throw new Error(`Resource name '${absolutePath}' is not valid.`);
}
/**
* Splits a string into path segments, using slashes as separators.
*
* @private
* @override
* @param {string} relativePath - The path to split.
* @returns {Array.<string>} - The split path segments.
*/
split(relativePath) {
// We may have an empty segment at the beginning or end if they had a
// leading or trailing slash (which we allow).
return relativePath.split('/').filter(segment => segment.length > 0);
}
/**
* String representation of a ResourcePath as expected by the API.
*
* @private
* @override
* @returns {string} The representation as expected by the API.
*/
canonicalString() {
let components = [
'projects',
this.projectId,
'databases',
this.databaseId,
];
if (this.segments.length > 0) {
components = components.concat('documents', this.segments);
}
return components.join('/');
}
/**
* Constructs a new instance of ResourcePath. We need this instead of using
* the normal constructor because polymorphic 'this' doesn't work on static
* methods.
*
* @private
* @override
* @param {Array.<string>} segments - Sequence of names of the parts of the
* path.
* @returns {ResourcePath} The newly created ResourcePath.
*/
construct(segments) {
return new ResourcePath(this.projectId, this.databaseId, segments);
}
/**
* Compare the current path against another ResourcePath object.
*
* @private
* @override
* @param {ResourcePath} other - The path to compare to.
* @returns {number} -1 if current < other, 1 if current > other, 0 if equal
*/
compareTo(other) {
// Ignore DocumentReference with {{projectId}} placeholders and assume that
// the resolved IDs match the provided ResourcePath. We could alternatively
// try to resolve the Project ID here, but this is asynchronous as it
// requires Disk I/O.
if (this.projectId !== '{{projectId}}' &&
other.projectId !== '{{projectId}}') {
if (this.projectId < other.projectId) {
return -1;
}
if (this.projectId > other.projectId) {
return 1;
}
}
if (this.databaseId < other.databaseId) {
return -1;
}
if (this.databaseId > other.databaseId) {
return 1;
}
return super.compareTo(other);
}
/**
* Converts this ResourcePath to the Firestore Proto representation.
*
* @private
*/
toProto() {
return {
referenceValue: this.formattedName,
};
}
}
exports.ResourcePath = ResourcePath;
/**
* A dot-separated path for navigating sub-objects within a document.
*
* @class
*/
class FieldPath extends Path {
/**
* Constructs a Firestore Field Path.
*
* @param {...string|string[]} segments - Sequence of field names that form
* this path.
*
* @example
* let query = firestore.collection('col');
* let fieldPath = new FieldPath('f.o.o', 'bar');
*
* query.where(fieldPath, '==', 42).get().then(snapshot => {
* snapshot.forEach(document => {
* console.log(`Document contains {'f.o.o' : {'bar' : 42}}`);
* });
* });
*/
constructor(segments) {
validate.minNumberOfArguments('FieldPath', arguments, 1);
const elements = segments instanceof Array ?
segments :
Array.prototype.slice.call(arguments);
for (let i = 0; i < elements.length; ++i) {
validate.isString(i, elements[i]);
if (elements[i].length === 0) {
throw new Error(`Argument at index ${i} should not be empty.`);
}
}
super(elements);
}
/**
* A special FieldPath value to refer to the ID of a document. It can be used
* in queries to sort or filter by the document ID.
*
* @returns {FieldPath}
*/
static documentId() {
return FieldPath._DOCUMENT_ID;
}
/**
* Returns true if the provided value can be used as a field path argument.
*
* @private
* @param {string|FieldPath} fieldPath - The value to verify.
* @throws if the string can't be used as a field path.
* @returns {boolean} 'true' when the path is valid.
*/
static validateFieldPath(fieldPath) {
if (!(fieldPath instanceof FieldPath)) {
if (!is_1.default.string(fieldPath)) {
throw validate_1.customObjectError(fieldPath);
}
if (fieldPath.indexOf('..') >= 0) {
throw new Error(`Paths must not contain '..' in them.`);
}
if (fieldPath.startsWith('.') || fieldPath.endsWith('.')) {
throw new Error(`Paths must not start or end with '.'.`);
}
if (!FIELD_PATH_RE.test(fieldPath)) {
throw new Error(`Paths can't be empty and must not contain '*~/[]'.`);
}
}
return true;
}
/**
* Turns a field path argument into a [FieldPath]{@link FieldPath}.
* Supports FieldPaths as input (which are passed through) and dot-separated
* strings.
*
* @private
* @param {string|FieldPath} fieldPath - The FieldPath to create.
* @returns {FieldPath} A field path representation.
*/
static fromArgument(fieldPath) {
// validateFieldPath() is used in all public API entry points to validate
// that fromArgument() is only called with a Field Path or a string.
return fieldPath instanceof FieldPath ? fieldPath :
new FieldPath(fieldPath.split('.'));
}
/**
* String representation of a FieldPath as expected by the API.
*
* @private
* @override
* @returns {string} The representation as expected by the API.
*/
canonicalString() {
return this.segments
.map(str => {
return UNESCAPED_FIELD_NAME_RE.test(str) ?
str :
'`' + str.replace('\\', '\\\\').replace('`', '\\`') + '`';
})
.join('.');
}
/**
* Splits a string into path segments, using dots as separators.
*
* @private
* @override
* @param {string} fieldPath - The path to split.
* @returns {Array.<string>} - The split path segments.
*/
split(fieldPath) {
return fieldPath.split('.');
}
/**
* Constructs a new instance of FieldPath. We need this instead of using
* the normal constructor because polymorphic 'this' doesn't work on static
* methods.
*
* @private
* @override
* @param {Array.<string>} segments - Sequence of field names.
     * @returns {FieldPath} The newly created FieldPath.
*/
construct(segments) {
return new FieldPath(segments);
}
/**
* Returns true if this `FieldPath` is equal to the provided value.
*
* @param {*} other The value to compare against.
* @return {boolean} true if this `FieldPath` is equal to the provided value.
*/
isEqual(other) {
return super.isEqual(other);
}
}
/**
* A special sentinel value to refer to the ID of a document.
*
* @private
*/
FieldPath._DOCUMENT_ID = new FieldPath('__name__');
exports.FieldPath = FieldPath;
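/*
 * Usage sketch, assuming this compiled module is required as './path' (the path
 * is an assumption). ResourcePath models slash-separated document/collection
 * names; FieldPath models dot-separated field names and backtick-escapes
 * segments that need quoting.
 */
const { ResourcePath, FieldPath } = require('./path');

const resource = ResourcePath.fromSlashSeparatedString(
    'projects/my-project/databases/(default)/documents/col/doc');
console.log(resource.isDocument);    // true: an even, non-zero number of segments is a document
console.log(resource.relativeName);  // 'col/doc'
console.log(resource.parent().id);   // 'col'

const field = new FieldPath('f.o.o', 'bar');
console.log(field.formattedName);                      // '`f.o.o`.bar'
console.log(FieldPath.fromArgument('a.b').toArray());  // [ 'a', 'b' ]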


@ -0,0 +1,124 @@
/*!
* Copyright 2018 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
const assert_1 = __importDefault(require("assert"));
/**
* An auto-resizing pool that distributes concurrent operations over multiple
* clients of type `T`.
*
* ClientPool is used within Firestore to manage a pool of GAPIC clients and
* automatically initializes multiple clients if we issue more than 100
* concurrent operations.
*
* @private
*/
class ClientPool {
/**
* @param concurrentOperationLimit - The number of operations that each client
* can handle.
* @param clientFactory - A factory function called as needed when new clients
* are required.
*/
constructor(concurrentOperationLimit, clientFactory) {
this.concurrentOperationLimit = concurrentOperationLimit;
this.clientFactory = clientFactory;
        /** Stores each active client and how many operations it has outstanding. */
this.activeClients = new Map();
}
/**
* Returns an already existing client if it has less than the maximum number
* of concurrent operations or initializes and returns a new client.
*/
acquire() {
let selectedClient = null;
let selectedRequestCount = 0;
this.activeClients.forEach((requestCount, client) => {
if (!selectedClient && requestCount < this.concurrentOperationLimit) {
selectedClient = client;
selectedRequestCount = requestCount;
}
});
if (!selectedClient) {
selectedClient = this.clientFactory();
assert_1.default(!this.activeClients.has(selectedClient), 'The provided client factory returned an existing instance');
}
this.activeClients.set(selectedClient, selectedRequestCount + 1);
return selectedClient;
}
/**
* Reduces the number of operations for the provided client, potentially
* removing it from the pool of active clients.
*/
release(client) {
let requestCount = this.activeClients.get(client) || 0;
assert_1.default(requestCount > 0, 'No active request');
requestCount = requestCount - 1;
this.activeClients.set(client, requestCount);
if (requestCount === 0) {
this.garbageCollect();
}
}
/**
* The number of currently registered clients.
*
* @return Number of currently registered clients.
*/
// Visible for testing.
get size() {
return this.activeClients.size;
}
/**
* Runs the provided operation in this pool. This function may create an
* additional client if all existing clients already operate at the concurrent
* operation limit.
*
* @param op - A callback function that returns a Promise. The client T will
     * be returned to the pool when the callback finishes.
* @return A Promise that resolves with the result of `op`.
*/
run(op) {
const client = this.acquire();
return op(client)
.catch(err => {
this.release(client);
return Promise.reject(err);
})
.then(res => {
this.release(client);
return res;
});
}
/**
* Deletes clients that are no longer executing operations. Keeps up to one
* idle client to reduce future initialization costs.
*/
garbageCollect() {
let idleClients = 0;
this.activeClients.forEach((requestCount, client) => {
if (requestCount === 0) {
++idleClients;
if (idleClients > 1) {
this.activeClients.delete(client);
}
}
});
}
}
exports.ClientPool = ClientPool;
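/*
 * Usage sketch, assuming this compiled module is required as './pool' (the path
 * is an assumption). The factory below stands in for the GAPIC client factory;
 * a second client is only created once the first one has reached its
 * concurrent-operation limit.
 */
const { ClientPool } = require('./pool');

let created = 0;
const pool = new ClientPool(/* concurrentOperationLimit= */ 2, () => ({ id: ++created }));

// run() acquires a client, invokes the callback and releases the client once
// the returned Promise settles.
const ops = [1, 2, 3].map(() =>
    pool.run(client => new Promise(resolve => setTimeout(() => resolve(client.id), 10))));

Promise.all(ops).then(ids => {
  console.log(ids);        // [ 1, 1, 2 ]: the third concurrent operation forced a second client
  console.log(pool.size);  // 1: garbage collection keeps a single idle client around
});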

File diff suppressed because it is too large


@ -0,0 +1,228 @@
/*!
* Copyright 2018 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
const is_1 = __importDefault(require("is"));
const firestore_proto_api_1 = require("../protos/firestore_proto_api");
const timestamp_1 = require("./timestamp");
const field_value_1 = require("./field-value");
const validate_1 = require("./validate");
const path_1 = require("./path");
const convert_1 = require("./convert");
const geo_point_1 = require("./geo-point");
/**
 * Serializer that is used to convert between JavaScript types and their
* Firestore Protobuf representation.
*
* @private
*/
class Serializer {
constructor(firestore, timestampsInSnapshotsEnabled) {
this.timestampsInSnapshotsEnabled = timestampsInSnapshotsEnabled;
// Instead of storing the `firestore` object, we store just a reference to
        // its `.doc()` method. This avoids a circular reference, which breaks
// JSON.stringify().
this.createReference = path => firestore.doc(path);
}
/**
     * Encodes a JavaScript object into the Firestore 'Fields' representation.
*
* @private
* @param obj The object to encode.
* @returns The Firestore 'Fields' representation
*/
encodeFields(obj) {
const fields = {};
for (const prop in obj) {
if (obj.hasOwnProperty(prop)) {
const val = this.encodeValue(obj[prop]);
if (val) {
fields[prop] = val;
}
}
}
return fields;
}
/**
* Encodes a JavaScript value into the Firestore 'Value' representation.
*
* @private
* @param val The object to encode
* @returns The Firestore Proto or null if we are deleting a field.
*/
encodeValue(val) {
if (val instanceof field_value_1.FieldTransform) {
return null;
}
if (is_1.default.string(val)) {
return {
stringValue: val,
};
}
if (is_1.default.bool(val)) {
return {
booleanValue: val,
};
}
if (is_1.default.integer(val)) {
return {
integerValue: val,
};
}
// Integers are handled above, the remaining numbers are treated as doubles
if (is_1.default.number(val)) {
return {
doubleValue: val,
};
}
if (is_1.default.date(val)) {
const timestamp = timestamp_1.Timestamp.fromDate(val);
return {
timestampValue: {
seconds: timestamp.seconds,
nanos: timestamp.nanoseconds,
},
};
}
if (val === null) {
return {
nullValue: firestore_proto_api_1.google.protobuf.NullValue.NULL_VALUE,
};
}
if (val instanceof Buffer || val instanceof Uint8Array) {
return {
bytesValue: val,
};
}
if (typeof val === 'object' && 'toProto' in val &&
typeof val.toProto === 'function') {
return val.toProto();
}
if (val instanceof Array) {
const array = {
arrayValue: {},
};
if (val.length > 0) {
array.arrayValue.values = [];
for (let i = 0; i < val.length; ++i) {
const enc = this.encodeValue(val[i]);
if (enc) {
array.arrayValue.values.push(enc);
}
}
}
return array;
}
if (typeof val === 'object' && isPlainObject(val)) {
const map = {
mapValue: {},
};
// If we encounter an empty object, we always need to send it to make sure
// the server creates a map entry.
if (!is_1.default.empty(val)) {
map.mapValue.fields = this.encodeFields(val);
if (is_1.default.empty(map.mapValue.fields)) {
return null;
}
}
return map;
}
throw validate_1.customObjectError(val);
}
/**
* Decodes a single Firestore 'Value' Protobuf.
*
* @private
* @param proto - A Firestore 'Value' Protobuf.
* @returns The converted JS type.
*/
decodeValue(proto) {
const valueType = convert_1.detectValueType(proto);
switch (valueType) {
case 'stringValue': {
return proto.stringValue;
}
case 'booleanValue': {
return proto.booleanValue;
}
case 'integerValue': {
return Number(proto.integerValue);
}
case 'doubleValue': {
return Number(proto.doubleValue);
}
case 'timestampValue': {
const timestamp = timestamp_1.Timestamp.fromProto(proto.timestampValue);
return this.timestampsInSnapshotsEnabled ? timestamp :
timestamp.toDate();
}
case 'referenceValue': {
const resourcePath = path_1.ResourcePath.fromSlashSeparatedString(proto.referenceValue);
return this.createReference(resourcePath.relativeName);
}
case 'arrayValue': {
const array = [];
if (is_1.default.array(proto.arrayValue.values)) {
for (const value of proto.arrayValue.values) {
array.push(this.decodeValue(value));
}
}
return array;
}
case 'nullValue': {
return null;
}
case 'mapValue': {
const obj = {};
const fields = proto.mapValue.fields;
for (const prop in fields) {
if (fields.hasOwnProperty(prop)) {
obj[prop] = this.decodeValue(fields[prop]);
}
}
return obj;
}
case 'geoPointValue': {
return geo_point_1.GeoPoint.fromProto(proto.geoPointValue);
}
case 'bytesValue': {
return proto.bytesValue;
}
default: {
throw new Error('Cannot decode type from Firestore Value: ' +
JSON.stringify(proto));
}
}
}
}
exports.Serializer = Serializer;
/**
* Verifies that 'obj' is a plain JavaScript object that can be encoded as a
* 'Map' in Firestore.
*
* @private
* @param input - The argument to verify.
 * @returns 'true' if the input can be treated as a plain object.
*/
function isPlainObject(input) {
return (typeof input === 'object' && input !== null &&
(Object.getPrototypeOf(input) === Object.prototype ||
Object.getPrototypeOf(input) === null));
}
exports.isPlainObject = isPlainObject;
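/*
 * Usage sketch, assuming this compiled module is required as './serializer' (the
 * path is an assumption). A stub with a doc() method stands in for the Firestore
 * client, since the serializer only uses it to materialize DocumentReferences
 * for referenceValue protos.
 */
const { Serializer, isPlainObject } = require('./serializer');

const serializer = new Serializer({ doc: path => ({ path }) }, /* timestampsInSnapshotsEnabled= */ true);

const fields = serializer.encodeFields({ title: 'hello', count: 3, tags: ['a'], nested: { ok: true } });
// fields.title  -> { stringValue: 'hello' }
// fields.count  -> { integerValue: 3 }
// fields.tags   -> { arrayValue: { values: [ { stringValue: 'a' } ] } }
// fields.nested -> { mapValue: { fields: { ok: { booleanValue: true } } } }

console.log(serializer.decodeValue({ integerValue: '42' }));  // 42 (decoded back to a JS number)
console.log(isPlainObject(new Date()));                       // false: only plain objects become 'mapValue'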


@ -0,0 +1,225 @@
/*!
* Copyright 2018 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
const is_1 = __importDefault(require("is"));
const validate_1 = require("./validate");
const validate = validate_1.createValidator();
/*!
* Number of nanoseconds in a millisecond.
*
* @type {number}
*/
const MS_TO_NANOS = 1000000;
/**
* A Timestamp represents a point in time independent of any time zone or
* calendar, represented as seconds and fractions of seconds at nanosecond
* resolution in UTC Epoch time. It is encoded using the Proleptic Gregorian
* Calendar which extends the Gregorian calendar backwards to year one. It is
* encoded assuming all minutes are 60 seconds long, i.e. leap seconds are
* "smeared" so that no leap second table is needed for interpretation. Range is
* from 0001-01-01T00:00:00Z to 9999-12-31T23:59:59.999999999Z.
*
* @see https://github.com/google/protobuf/blob/master/src/google/protobuf/timestamp.proto
*/
class Timestamp {
/**
* Creates a new timestamp with the current date, with millisecond precision.
*
* @example
* let documentRef = firestore.doc('col/doc');
*
* documentRef.set({ updateTime:Firestore.Timestamp.now() });
*
* @return {Timestamp} A new `Timestamp` representing the current date.
*/
static now() {
return Timestamp.fromMillis(Date.now());
}
/**
* Creates a new timestamp from the given date.
*
* @example
* let documentRef = firestore.doc('col/doc');
*
     * let date = new Date('01 Jan 2000 00:00:00 GMT');
* documentRef.set({ startTime:Firestore.Timestamp.fromDate(date) });
*
* @param {Date} date The date to initialize the `Timestamp` from.
* @return {Timestamp} A new `Timestamp` representing the same point in time
* as the given date.
*/
static fromDate(date) {
return Timestamp.fromMillis(date.getTime());
}
/**
* Creates a new timestamp from the given number of milliseconds.
*
* @example
* let documentRef = firestore.doc('col/doc');
*
* documentRef.set({ startTime:Firestore.Timestamp.fromMillis(42) });
*
* @param {number} milliseconds Number of milliseconds since Unix epoch
* 1970-01-01T00:00:00Z.
* @return {Timestamp} A new `Timestamp` representing the same point in time
* as the given number of milliseconds.
*/
static fromMillis(milliseconds) {
const seconds = Math.floor(milliseconds / 1000);
const nanos = (milliseconds - seconds * 1000) * MS_TO_NANOS;
return new Timestamp(seconds, nanos);
}
/**
* Generates a `Timestamp` object from a Timestamp proto.
*
* @private
* @param {Object} timestamp The `Timestamp` Protobuf object.
*/
static fromProto(timestamp) {
return new Timestamp(Number(timestamp.seconds || 0), Number(timestamp.nanos || 0));
}
/**
* Creates a new timestamp.
*
* @example
* let documentRef = firestore.doc('col/doc');
*
* documentRef.set({ startTime:new Firestore.Timestamp(42, 0) });
*
* @param {number} seconds The number of seconds of UTC time since Unix epoch
* 1970-01-01T00:00:00Z. Must be from 0001-01-01T00:00:00Z to
* 9999-12-31T23:59:59Z inclusive.
* @param {number} nanoseconds The non-negative fractions of a second at
* nanosecond resolution. Negative second values with fractions must still
* have non-negative nanoseconds values that count forward in time. Must be
* from 0 to 999,999,999 inclusive.
*/
constructor(seconds, nanoseconds) {
validate.isInteger('seconds', seconds);
validate.isInteger('nanoseconds', nanoseconds, 0, 999999999);
this._seconds = seconds;
this._nanoseconds = nanoseconds;
}
/**
* The number of seconds of UTC time since Unix epoch 1970-01-01T00:00:00Z.
*
* @example
* let documentRef = firestore.doc('col/doc');
*
     * documentRef.get().then(snap => {
     *   let updated = snap.updateTime;
     *   console.log(`Updated at ${updated.seconds}s ${updated.nanoseconds}ns`);
     * });
*
* @type {number}
*/
get seconds() {
return this._seconds;
}
/**
* The non-negative fractions of a second at nanosecond resolution.
*
* @example
* let documentRef = firestore.doc('col/doc');
*
* documentRef.get().then(snap => {
* let updated = snap.updateTime;
* console.log(`Updated at ${updated.seconds}s ${updated.nanoseconds}ns`);
* });
*
* @type {number}
*/
get nanoseconds() {
return this._nanoseconds;
}
/**
* Returns a new `Date` corresponding to this timestamp. This may lose
* precision.
*
* @example
* let documentRef = firestore.doc('col/doc');
*
* documentRef.get().then(snap => {
* console.log(`Document updated at: ${snap.updateTime.toDate()}`);
* });
*
* @return {Date} JavaScript `Date` object representing the same point in time
* as this `Timestamp`, with millisecond precision.
*/
toDate() {
return new Date(this._seconds * 1000 + Math.round(this._nanoseconds / MS_TO_NANOS));
}
/**
* Returns the number of milliseconds since Unix epoch 1970-01-01T00:00:00Z.
*
* @example
* let documentRef = firestore.doc('col/doc');
*
* documentRef.get().then(snap => {
* let startTime = snap.get('startTime');
* let endTime = snap.get('endTime');
* console.log(`Duration: ${endTime - startTime}`);
* });
*
* @return {number} The point in time corresponding to this timestamp,
* represented as the number of milliseconds since Unix epoch
* 1970-01-01T00:00:00Z.
*/
toMillis() {
return this._seconds * 1000 + Math.floor(this._nanoseconds / MS_TO_NANOS);
}
/**
* Returns 'true' if this `Timestamp` is equal to the provided one.
*
* @example
* let documentRef = firestore.doc('col/doc');
*
* documentRef.get().then(snap => {
* if (snap.createTime.isEqual(snap.updateTime)) {
* console.log('Document is in its initial state.');
* }
* });
*
* @param {any} other The `Timestamp` to compare against.
* @return {boolean} 'true' if this `Timestamp` is equal to the provided one.
*/
isEqual(other) {
return (this === other ||
(is_1.default.instanceof(other, Timestamp) && this._seconds === other.seconds &&
this._nanoseconds === other.nanoseconds));
}
/**
* Generates the Protobuf `Timestamp` object for this timestamp.
*
* @private
* @returns {Object} The `Timestamp` Protobuf object.
*/
toProto() {
const timestamp = {};
if (this.seconds) {
timestamp.seconds = this.seconds;
}
if (this.nanoseconds) {
timestamp.nanos = this.nanoseconds;
}
return { timestampValue: timestamp };
}
}
exports.Timestamp = Timestamp;
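/*
 * Usage sketch, assuming this compiled module is required as './timestamp' (the
 * path is an assumption): fromMillis() splits the value into seconds plus
 * nanoseconds, and the conversions round-trip at millisecond precision.
 */
const { Timestamp } = require('./timestamp');

const ts = Timestamp.fromMillis(1546300800123);  // 2019-01-01T00:00:00.123Z
console.log(ts.seconds);                 // 1546300800
console.log(ts.nanoseconds);             // 123000000 (123 ms * 1,000,000 nanos per ms)
console.log(ts.toMillis());              // 1546300800123
console.log(ts.toDate().toISOString());  // '2019-01-01T00:00:00.123Z'
console.log(ts.isEqual(new Timestamp(1546300800, 123000000)));  // true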


@ -0,0 +1,311 @@
/*!
* Copyright 2017 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
const is_1 = __importDefault(require("is"));
const reference_1 = require("./reference");
const util_1 = require("./util");
/*!
* Error message for transactional reads that were executed after performing
* writes.
*/
const READ_AFTER_WRITE_ERROR_MSG = 'Firestore transactions require all reads to be executed before all writes.';
/*!
* Transactions can be retried if the initial stream opening errors out.
*/
const ALLOW_RETRIES = true;
/**
* A reference to a transaction.
*
* The Transaction object passed to a transaction's updateFunction provides
* the methods to read and write data within the transaction context. See
* [runTransaction()]{@link Firestore#runTransaction}.
*
* @class
*/
class Transaction {
/**
* @private
* @hideconstructor
*
* @param {Firestore} firestore - The Firestore Database client.
* @param {Transaction=} previousTransaction - If
* available, the failed transaction that is being retried.
*/
constructor(firestore, previousTransaction) {
this._firestore = firestore;
this._validator = firestore._validator;
this._previousTransaction = previousTransaction;
this._writeBatch = firestore.batch();
this._requestTag =
previousTransaction ? previousTransaction.requestTag : util_1.requestTag();
}
/**
* Retrieve a document or a query result from the database. Holds a
* pessimistic lock on all returned documents.
*
* @param {DocumentReference|Query} refOrQuery - The
* document or query to return.
* @returns {Promise} A Promise that resolves with a DocumentSnapshot or
* QuerySnapshot for the returned documents.
*
* @example
* firestore.runTransaction(transaction => {
* let documentRef = firestore.doc('col/doc');
* return transaction.get(documentRef).then(doc => {
* if (doc.exists) {
* transaction.update(documentRef, { count: doc.get('count') + 1 });
* } else {
* transaction.create(documentRef, { count: 1 });
* }
* });
* });
*/
get(refOrQuery) {
if (!this._writeBatch.isEmpty) {
throw new Error(READ_AFTER_WRITE_ERROR_MSG);
}
if (is_1.default.instance(refOrQuery, reference_1.DocumentReference)) {
return this._firestore
.getAll_([refOrQuery], this._requestTag, this._transactionId)
.then(res => {
return Promise.resolve(res[0]);
});
}
if (is_1.default.instance(refOrQuery, reference_1.Query)) {
return refOrQuery._get({ transactionId: this._transactionId });
}
throw new Error('Argument "refOrQuery" must be a DocumentRef or a Query.');
}
/**
* Retrieves multiple documents from Firestore. Holds a pessimistic lock on
* all returned documents.
*
* @param {...DocumentReference} documents - The document references
* to receive.
* @returns {Promise<Array.<DocumentSnapshot>>} A Promise that
* contains an array with the resulting document snapshots.
*
* @example
* let firstDoc = firestore.doc('col/doc1');
* let secondDoc = firestore.doc('col/doc2');
* let resultDoc = firestore.doc('col/doc3');
*
* firestore.runTransaction(transaction => {
* return transaction.getAll(firstDoc, secondDoc).then(docs => {
* transaction.set(resultDoc, {
     *       sum: docs[0].get('count') + docs[1].get('count')
* });
* });
* });
*/
getAll(documents) {
if (!this._writeBatch.isEmpty) {
throw new Error(READ_AFTER_WRITE_ERROR_MSG);
}
documents = is_1.default.array(arguments[0]) ? arguments[0].slice() :
Array.prototype.slice.call(arguments);
for (let i = 0; i < documents.length; ++i) {
this._validator.isDocumentReference(i, documents[i]);
}
return this._firestore.getAll_(documents, this._requestTag, this._transactionId);
}
/**
* Create the document referred to by the provided
* [DocumentReference]{@link DocumentReference}. The operation will
* fail the transaction if a document exists at the specified location.
*
* @param {DocumentReference} documentRef - A reference to the
* document to be created.
* @param {DocumentData} data - The object data to serialize as the document.
* @returns {Transaction} This Transaction instance. Used for
* chaining method calls.
*
* @example
* firestore.runTransaction(transaction => {
* let documentRef = firestore.doc('col/doc');
* return transaction.get(documentRef).then(doc => {
* if (!doc.exists) {
* transaction.create(documentRef, { foo: 'bar' });
* }
* });
* });
*/
create(documentRef, data) {
this._writeBatch.create(documentRef, data);
return this;
}
/**
* Writes to the document referred to by the provided
* [DocumentReference]{@link DocumentReference}. If the document
* does not exist yet, it will be created. If you pass
* [SetOptions]{@link SetOptions}, the provided data can be merged into the
* existing document.
*
* @param {DocumentReference} documentRef - A reference to the
* document to be set.
* @param {DocumentData} data - The object to serialize as the document.
* @param {SetOptions=} options - An object to configure the set behavior.
* @param {boolean=} options.merge - If true, set() merges the values
* specified in its data argument. Fields omitted from this set() call
* remain untouched.
* @param {Array.<string|FieldPath>=} options.mergeFields - If provided,
* set() only replaces the specified field paths. Any field path that is not
* specified is ignored and remains untouched.
* @returns {Transaction} This Transaction instance. Used for
* chaining method calls.
*
* @example
* firestore.runTransaction(transaction => {
* let documentRef = firestore.doc('col/doc');
* transaction.set(documentRef, { foo: 'bar' });
* return Promise.resolve();
* });
*/
set(documentRef, data, options) {
this._writeBatch.set(documentRef, data, options);
return this;
}
/**
* Updates fields in the document referred to by the provided
* [DocumentReference]{@link DocumentReference}. The update will
* fail if applied to a document that does not exist.
*
* The update() method accepts either an object with field paths encoded as
* keys and field values encoded as values, or a variable number of arguments
* that alternate between field paths and field values. Nested fields can be
* updated by providing dot-separated field path strings or by providing
* FieldPath objects.
*
* A Precondition restricting this update can be specified as the last
* argument.
*
* @param {DocumentReference} documentRef - A reference to the
* document to be updated.
* @param {UpdateData|string|FieldPath} dataOrField - An object
* containing the fields and values with which to update the document
* or the path of the first field to update.
     * @param {...(Precondition|*|string|FieldPath)} preconditionOrValues -
* An alternating list of field paths and values to update or a Precondition
     * to enforce on this update.
* @returns {Transaction} This Transaction instance. Used for
* chaining method calls.
*
* @example
* firestore.runTransaction(transaction => {
* let documentRef = firestore.doc('col/doc');
* return transaction.get(documentRef).then(doc => {
* if (doc.exists) {
* transaction.update(documentRef, { count: doc.get('count') + 1 });
* } else {
* transaction.create(documentRef, { count: 1 });
* }
* });
* });
*/
update(documentRef, dataOrField, preconditionOrValues) {
this._validator.minNumberOfArguments('update', arguments, 2);
preconditionOrValues = Array.prototype.slice.call(arguments, 2);
this._writeBatch.update.apply(this._writeBatch, [documentRef, dataOrField].concat(preconditionOrValues));
return this;
}
/**
* Deletes the document referred to by the provided [DocumentReference]
* {@link DocumentReference}.
*
* @param {DocumentReference} documentRef - A reference to the
* document to be deleted.
* @param {Precondition=} precondition - A precondition to enforce for this
* delete.
* @param {Timestamp=} precondition.lastUpdateTime If set, enforces that the
* document was last updated at lastUpdateTime. Fails the transaction if the
* document doesn't exist or was last updated at a different time.
* @returns {Transaction} This Transaction instance. Used for
* chaining method calls.
*
* @example
* firestore.runTransaction(transaction => {
* let documentRef = firestore.doc('col/doc');
* transaction.delete(documentRef);
* return Promise.resolve();
* });
*/
delete(documentRef, precondition) {
this._writeBatch.delete(documentRef, precondition);
return this;
}
/**
* Starts a transaction and obtains the transaction id from the server.
*
* @private
* @returns {Promise} An empty Promise.
*/
begin() {
let request = {
database: this._firestore.formattedName,
};
if (this._previousTransaction) {
request.options = {
readWrite: {
retryTransaction: this._previousTransaction._transactionId,
},
};
}
return this._firestore
.request('beginTransaction', request, this._requestTag, ALLOW_RETRIES)
.then(resp => {
this._transactionId = resp.transaction;
});
}
/**
* Commits all queued-up changes in this transaction and releases all locks.
*
* @private
* @returns {Promise} An empty Promise.
*/
commit() {
return this._writeBatch.commit_({
transactionId: this._transactionId,
requestTag: this._requestTag,
});
}
/**
* Releases all locks and rolls back this transaction.
*
* @private
* @returns {Promise} An empty Promise.
*/
rollback() {
let request = {
database: this._firestore.formattedName,
transaction: this._transactionId,
};
return this._firestore.request('rollback', request, this._requestTag);
}
/**
     * Returns the tag to use with all requests for this Transaction.
* @private
* @return {string} A unique client-generated identifier for this transaction.
*/
get requestTag() {
return this._requestTag;
}
}
exports.Transaction = Transaction;
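/*
 * Lifecycle sketch: a simplified version of how the Firestore client might drive
 * this class internally (the real runTransaction() also handles retries by
 * passing the failed attempt as previousTransaction). The './transaction' path,
 * the firestore instance and updateFunction are assumptions.
 */
const { Transaction } = require('./transaction');

async function runTransactionOnce(firestore, updateFunction, previousTransaction) {
  const transaction = new Transaction(firestore, previousTransaction);
  await transaction.begin();  // obtains a transaction id from the server
  try {
    const result = await updateFunction(transaction);  // reads first, then queued writes
    await transaction.commit();                        // flushes the queued writes under the transaction id
    return result;
  } catch (err) {
    await transaction.rollback();                      // releases the server-side locks
    throw err;
  }
}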


@ -0,0 +1,17 @@
/*!
* Copyright 2018 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
Object.defineProperty(exports, "__esModule", { value: true });


@ -0,0 +1,46 @@
/*!
* Copyright 2018 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
Object.defineProperty(exports, "__esModule", { value: true });
/**
* Generate a unique client-side identifier.
*
* Used for the creation of new documents.
*
* @private
* @returns {string} A unique 20-character wide identifier.
*/
function autoId() {
const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
let autoId = '';
for (let i = 0; i < 20; i++) {
autoId += chars.charAt(Math.floor(Math.random() * chars.length));
}
return autoId;
}
exports.autoId = autoId;
/**
* Generate a short and semi-random client-side identifier.
*
* Used for the creation of request tags.
*
* @private
* @returns {string} A random 5-character wide identifier.
*/
function requestTag() {
return autoId().substr(0, 5);
}
exports.requestTag = requestTag;
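/*
 * Usage sketch, assuming this compiled module is required as './util' (the path
 * is an assumption).
 */
const { autoId, requestTag } = require('./util');

console.log(autoId());      // e.g. 'pLq9XcN0aB1cD2eF3gH4': 20 random alphanumeric characters
console.log(requestTag());  // e.g. 'pLq9X': the 5-character prefix used to tag log lines per request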


@ -0,0 +1,107 @@
"use strict";
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Note: this file is purely for documentation. Any contents are not expected
// to be loaded as the JS file.
/**
* A set of field paths on a document.
* Used to restrict a get or update operation on a document to a subset of its
* fields.
* This is different from standard field masks, as this is always scoped to a
 * Document, and takes into account the dynamic nature of Value.
*
* @property {string[]} fieldPaths
* The list of field paths in the mask. See Document.fields for a field
* path syntax reference.
*
* @typedef DocumentMask
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.DocumentMask definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/common.proto}
*/
var DocumentMask = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* A precondition on a document, used for conditional operations.
*
* @property {boolean} exists
* When set to `true`, the target document must exist.
* When set to `false`, the target document must not exist.
*
* @property {Object} updateTime
* When set, the target document must exist and have been last updated at
* that time.
*
* This object should have the same structure as [Timestamp]{@link
* google.protobuf.Timestamp}
*
* @typedef Precondition
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.Precondition definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/common.proto}
*/
var Precondition = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* Options for creating a new transaction.
*
* @property {Object} readOnly
* The transaction can only be used for read operations.
*
* This object should have the same structure as [ReadOnly]{@link
* google.firestore.v1beta1.ReadOnly}
*
* @property {Object} readWrite
* The transaction can be used for both read and write operations.
*
* This object should have the same structure as [ReadWrite]{@link
* google.firestore.v1beta1.ReadWrite}
*
* @typedef TransactionOptions
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.TransactionOptions definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/common.proto}
*/
var TransactionOptions = {
// This is for documentation. Actual contents will be loaded by gRPC.
/**
* Options for a transaction that can be used to read and write documents.
*
* @property {string} retryTransaction
* An optional transaction to retry.
*
* @typedef ReadWrite
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.TransactionOptions.ReadWrite definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/common.proto}
*/
ReadWrite: {
// This is for documentation. Actual contents will be loaded by gRPC.
},
/**
* Options for a transaction that can only be used to read documents.
*
* @property {Object} readTime
* Reads documents at the given time.
* This may not be older than 60 seconds.
*
* This object should have the same structure as [Timestamp]{@link
* google.protobuf.Timestamp}
*
* @typedef ReadOnly
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.TransactionOptions.ReadOnly definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/common.proto}
*/
ReadOnly: {
// This is for documentation. Actual contents will be loaded by gRPC.
}
};
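// Minimal illustrative sketches of TransactionOptions literals in the shape
// documented above: a read-write retry and a read-only read at a timestamp.
// The transaction ID and timestamp values are hypothetical.
var exampleReadWriteOptions = {
  readWrite: {retryTransaction: 'transaction-id-from-a-previous-attempt'}
};
var exampleReadOnlyOptions = {
  readOnly: {readTime: {seconds: 1545730073, nanos: 0}}
};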

View File

@ -0,0 +1,183 @@
"use strict";
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Note: this file is purely for documentation. Any contents are not expected
// to be loaded as the JS file.
/**
* A Firestore document.
*
* Must not exceed 1 MiB - 4 bytes.
*
* @property {string} name
* The resource name of the document, for example
* `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
*
* @property {Object.<string, Object>} fields
* The document's fields.
*
* The map keys represent field names.
*
* A simple field name contains only characters `a` to `z`, `A` to `Z`,
* `0` to `9`, or `_`, and must not start with `0` to `9` or `_`. For example,
* `foo_bar_17`.
*
* Field names matching the regular expression `__.*__` are reserved. Reserved
* field names are forbidden except in certain documented contexts. The map
* keys, represented as UTF-8, must not exceed 1,500 bytes and cannot be
* empty.
*
* Field paths may be used in other contexts to refer to structured fields
* defined here. For `map_value`, the field path is represented by the simple
* or quoted field names of the containing fields, delimited by `.`. For
* example, the structured field
* `"foo" : { map_value: { "x&y" : { string_value: "hello" }}}` would be
* represented by the field path `foo.x&y`.
*
* Within a field path, a quoted field name starts and ends with `` ` `` and
* may contain any character. Some characters, including `` ` ``, must be
* escaped using a `\`. For example, `` `x&y` `` represents `x&y` and
* `` `bak\`tik` `` represents `` bak`tik ``.
*
* @property {Object} createTime
* Output only. The time at which the document was created.
*
* This value increases monotonically when a document is deleted then
* recreated. It can also be compared to values from other documents and
* the `read_time` of a query.
*
* This object should have the same structure as [Timestamp]{@link
* google.protobuf.Timestamp}
*
* @property {Object} updateTime
* Output only. The time at which the document was last changed.
*
 * This value is initially set to the `create_time` then increases
* monotonically with each change to the document. It can also be
* compared to values from other documents and the `read_time` of a query.
*
* This object should have the same structure as [Timestamp]{@link
* google.protobuf.Timestamp}
*
* @typedef Document
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.Document definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/document.proto}
*/
var Document = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
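// A minimal illustrative sketch of a Document literal in the shape documented
// above; the resource name and field values are hypothetical, and each field
// value uses the Value shape described below.
var exampleDocument = {
  name: 'projects/my-project/databases/my-database/documents/users/alovelace',
  fields: {
    displayName: {stringValue: 'Ada Lovelace'},
    yearOfBirth: {integerValue: 1815}
  },
  createTime: {seconds: 1545730073, nanos: 0},
  updateTime: {seconds: 1545730073, nanos: 0}
};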
/**
* A message that can hold any of the supported value types.
*
* @property {number} nullValue
* A null value.
*
* The number should be among the values of [NullValue]{@link
* google.protobuf.NullValue}
*
* @property {boolean} booleanValue
* A boolean value.
*
* @property {number} integerValue
* An integer value.
*
* @property {number} doubleValue
* A double value.
*
* @property {Object} timestampValue
* A timestamp value.
*
* Precise only to microseconds. When stored, any additional precision is
* rounded down.
*
* This object should have the same structure as [Timestamp]{@link
* google.protobuf.Timestamp}
*
* @property {string} stringValue
* A string value.
*
* The string, represented as UTF-8, must not exceed 1 MiB - 89 bytes.
* Only the first 1,500 bytes of the UTF-8 representation are considered by
* queries.
*
* @property {string} bytesValue
* A bytes value.
*
* Must not exceed 1 MiB - 89 bytes.
* Only the first 1,500 bytes are considered by queries.
*
* @property {string} referenceValue
* A reference to a document. For example:
* `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
*
* @property {Object} geoPointValue
* A geo point value representing a point on the surface of Earth.
*
* This object should have the same structure as [LatLng]{@link
* google.type.LatLng}
*
* @property {Object} arrayValue
* An array value.
*
* Cannot contain another array value.
*
* This object should have the same structure as [ArrayValue]{@link
* google.firestore.v1beta1.ArrayValue}
*
* @property {Object} mapValue
* A map value.
*
* This object should have the same structure as [MapValue]{@link
* google.firestore.v1beta1.MapValue}
*
* @typedef Value
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.Value definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/document.proto}
*/
var Value = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
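// Minimal illustrative sketches of Value literals for a few of the variants
// documented above; exactly one variant is set per Value, and all values
// shown are hypothetical.
var exampleStringValue = {stringValue: 'hello world'};
var exampleGeoPointValue = {geoPointValue: {latitude: 51.48, longitude: 0.0}};
var exampleArrayValue = {
  arrayValue: {values: [{integerValue: 1}, {integerValue: 2}]}
};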
/**
* An array value.
*
* @property {Object[]} values
* Values in the array.
*
* This object should have the same structure as [Value]{@link
* google.firestore.v1beta1.Value}
*
* @typedef ArrayValue
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.ArrayValue definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/document.proto}
*/
var ArrayValue = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* A map value.
*
* @property {Object.<string, Object>} fields
* The map's fields.
*
* The map keys represent field names. Field names matching the regular
* expression `__.*__` are reserved. Reserved field names are forbidden except
* in certain documented contexts. The map keys, represented as UTF-8, must
* not exceed 1,500 bytes and cannot be empty.
*
* @typedef MapValue
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.MapValue definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/document.proto}
*/
var MapValue = {
// This is for documentation. Actual contents will be loaded by gRPC.
};

View File

@ -0,0 +1,881 @@
"use strict";
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Note: this file is purely for documentation. Any contents are not expected
// to be loaded as the JS file.
/**
* The request for Firestore.GetDocument.
*
* @property {string} name
* The resource name of the Document to get. In the format:
* `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
*
* @property {Object} mask
* The fields to return. If not set, returns all fields.
*
* If the document has a field that is not present in this mask, that field
* will not be returned in the response.
*
* This object should have the same structure as [DocumentMask]{@link
* google.firestore.v1beta1.DocumentMask}
*
* @property {string} transaction
* Reads the document in a transaction.
*
* @property {Object} readTime
* Reads the version of the document at the given time.
* This may not be older than 60 seconds.
*
* This object should have the same structure as [Timestamp]{@link
* google.protobuf.Timestamp}
*
* @typedef GetDocumentRequest
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.GetDocumentRequest definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var GetDocumentRequest = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
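// A minimal illustrative sketch of a GetDocumentRequest literal in the shape
// documented above; the resource name and field path are hypothetical.
var exampleGetDocumentRequest = {
  name: 'projects/my-project/databases/my-database/documents/users/alovelace',
  mask: {fieldPaths: ['displayName']}
};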
/**
* The request for Firestore.ListDocuments.
*
* @property {string} parent
* The parent resource name. In the format:
* `projects/{project_id}/databases/{database_id}/documents` or
* `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
* For example:
* `projects/my-project/databases/my-database/documents` or
* `projects/my-project/databases/my-database/documents/chatrooms/my-chatroom`
*
* @property {string} collectionId
* The collection ID, relative to `parent`, to list. For example: `chatrooms`
* or `messages`.
*
* @property {number} pageSize
* The maximum number of documents to return.
*
* @property {string} pageToken
* The `next_page_token` value returned from a previous List request, if any.
*
* @property {string} orderBy
* The order to sort results by. For example: `priority desc, name`.
*
* @property {Object} mask
* The fields to return. If not set, returns all fields.
*
* If a document has a field that is not present in this mask, that field
* will not be returned in the response.
*
* This object should have the same structure as [DocumentMask]{@link
* google.firestore.v1beta1.DocumentMask}
*
* @property {string} transaction
* Reads documents in a transaction.
*
* @property {Object} readTime
* Reads documents as they were at the given time.
* This may not be older than 60 seconds.
*
* This object should have the same structure as [Timestamp]{@link
* google.protobuf.Timestamp}
*
* @property {boolean} showMissing
* If the list should show missing documents. A missing document is a
* document that does not exist but has sub-documents. These documents will
* be returned with a key but will not have fields, Document.create_time,
* or Document.update_time set.
*
* Requests with `show_missing` may not specify `where` or
* `order_by`.
*
* @typedef ListDocumentsRequest
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.ListDocumentsRequest definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var ListDocumentsRequest = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
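// A minimal illustrative sketch of a ListDocumentsRequest literal using the
// camelCase field names documented above; identifiers are hypothetical.
var exampleListDocumentsRequest = {
  parent: 'projects/my-project/databases/my-database/documents',
  collectionId: 'chatrooms',
  pageSize: 20,
  orderBy: 'name'
};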
/**
* The response for Firestore.ListDocuments.
*
* @property {Object[]} documents
* The Documents found.
*
* This object should have the same structure as [Document]{@link
* google.firestore.v1beta1.Document}
*
* @property {string} nextPageToken
* The next page token.
*
* @typedef ListDocumentsResponse
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.ListDocumentsResponse definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var ListDocumentsResponse = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* The request for Firestore.CreateDocument.
*
* @property {string} parent
* The parent resource. For example:
* `projects/{project_id}/databases/{database_id}/documents` or
* `projects/{project_id}/databases/{database_id}/documents/chatrooms/{chatroom_id}`
*
* @property {string} collectionId
* The collection ID, relative to `parent`, to list. For example: `chatrooms`.
*
* @property {string} documentId
* The client-assigned document ID to use for this document.
*
* Optional. If not specified, an ID will be assigned by the service.
*
* @property {Object} document
* The document to create. `name` must not be set.
*
* This object should have the same structure as [Document]{@link
* google.firestore.v1beta1.Document}
*
* @property {Object} mask
* The fields to return. If not set, returns all fields.
*
* If the document has a field that is not present in this mask, that field
* will not be returned in the response.
*
* This object should have the same structure as [DocumentMask]{@link
* google.firestore.v1beta1.DocumentMask}
*
* @typedef CreateDocumentRequest
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.CreateDocumentRequest definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var CreateDocumentRequest = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* The request for Firestore.UpdateDocument.
*
* @property {Object} document
* The updated document.
* Creates the document if it does not already exist.
*
* This object should have the same structure as [Document]{@link
* google.firestore.v1beta1.Document}
*
* @property {Object} updateMask
* The fields to update.
* None of the field paths in the mask may contain a reserved name.
*
* If the document exists on the server and has fields not referenced in the
* mask, they are left unchanged.
* Fields referenced in the mask, but not present in the input document, are
* deleted from the document on the server.
*
* This object should have the same structure as [DocumentMask]{@link
* google.firestore.v1beta1.DocumentMask}
*
* @property {Object} mask
* The fields to return. If not set, returns all fields.
*
* If the document has a field that is not present in this mask, that field
* will not be returned in the response.
*
* This object should have the same structure as [DocumentMask]{@link
* google.firestore.v1beta1.DocumentMask}
*
* @property {Object} currentDocument
* An optional precondition on the document.
* The request will fail if this is set and not met by the target document.
*
* This object should have the same structure as [Precondition]{@link
* google.firestore.v1beta1.Precondition}
*
* @typedef UpdateDocumentRequest
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.UpdateDocumentRequest definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var UpdateDocumentRequest = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
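// A minimal illustrative sketch of an UpdateDocumentRequest literal: update a
// single field and require that the document already exists. All identifiers
// are hypothetical.
var exampleUpdateDocumentRequest = {
  document: {
    name: 'projects/my-project/databases/my-database/documents/users/alovelace',
    fields: {displayName: {stringValue: 'A. Lovelace'}}
  },
  updateMask: {fieldPaths: ['displayName']},
  currentDocument: {exists: true}
};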
/**
* The request for Firestore.DeleteDocument.
*
* @property {string} name
* The resource name of the Document to delete. In the format:
* `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
*
* @property {Object} currentDocument
* An optional precondition on the document.
* The request will fail if this is set and not met by the target document.
*
* This object should have the same structure as [Precondition]{@link
* google.firestore.v1beta1.Precondition}
*
* @typedef DeleteDocumentRequest
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.DeleteDocumentRequest definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var DeleteDocumentRequest = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* The request for Firestore.BatchGetDocuments.
*
* @property {string} database
* The database name. In the format:
* `projects/{project_id}/databases/{database_id}`.
*
* @property {string[]} documents
* The names of the documents to retrieve. In the format:
* `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
 * The request will fail if any of the documents is not a child resource of the
* given `database`. Duplicate names will be elided.
*
* @property {Object} mask
* The fields to return. If not set, returns all fields.
*
* If a document has a field that is not present in this mask, that field will
* not be returned in the response.
*
* This object should have the same structure as [DocumentMask]{@link
* google.firestore.v1beta1.DocumentMask}
*
* @property {string} transaction
* Reads documents in a transaction.
*
* @property {Object} newTransaction
* Starts a new transaction and reads the documents.
* Defaults to a read-only transaction.
* The new transaction ID will be returned as the first response in the
* stream.
*
* This object should have the same structure as [TransactionOptions]{@link
* google.firestore.v1beta1.TransactionOptions}
*
* @property {Object} readTime
* Reads documents as they were at the given time.
* This may not be older than 60 seconds.
*
* This object should have the same structure as [Timestamp]{@link
* google.protobuf.Timestamp}
*
* @typedef BatchGetDocumentsRequest
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.BatchGetDocumentsRequest definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var BatchGetDocumentsRequest = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
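// A minimal illustrative sketch of a BatchGetDocumentsRequest literal that
// starts a new read-only transaction for the reads; identifiers are
// hypothetical.
var exampleBatchGetDocumentsRequest = {
  database: 'projects/my-project/databases/my-database',
  documents: [
    'projects/my-project/databases/my-database/documents/users/alovelace',
    'projects/my-project/databases/my-database/documents/users/aturing'
  ],
  newTransaction: {readOnly: {}}
};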
/**
* The streamed response for Firestore.BatchGetDocuments.
*
* @property {Object} found
* A document that was requested.
*
* This object should have the same structure as [Document]{@link
* google.firestore.v1beta1.Document}
*
* @property {string} missing
* A document name that was requested but does not exist. In the format:
* `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
*
* @property {string} transaction
* The transaction that was started as part of this request.
* Will only be set in the first response, and only if
* BatchGetDocumentsRequest.new_transaction was set in the request.
*
* @property {Object} readTime
* The time at which the document was read.
 * This may be monotonically increasing; in this case, the previous documents
 * in the result stream are guaranteed not to have changed between their
 * `read_time` and this one.
*
* This object should have the same structure as [Timestamp]{@link
* google.protobuf.Timestamp}
*
* @typedef BatchGetDocumentsResponse
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.BatchGetDocumentsResponse definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var BatchGetDocumentsResponse = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* The request for Firestore.BeginTransaction.
*
* @property {string} database
* The database name. In the format:
* `projects/{project_id}/databases/{database_id}`.
*
* @property {Object} options
* The options for the transaction.
* Defaults to a read-write transaction.
*
* This object should have the same structure as [TransactionOptions]{@link
* google.firestore.v1beta1.TransactionOptions}
*
* @typedef BeginTransactionRequest
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.BeginTransactionRequest definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var BeginTransactionRequest = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* The response for Firestore.BeginTransaction.
*
* @property {string} transaction
* The transaction that was started.
*
* @typedef BeginTransactionResponse
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.BeginTransactionResponse definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var BeginTransactionResponse = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* The request for Firestore.Commit.
*
* @property {string} database
* The database name. In the format:
* `projects/{project_id}/databases/{database_id}`.
*
* @property {Object[]} writes
* The writes to apply.
*
* Always executed atomically and in order.
*
* This object should have the same structure as [Write]{@link
* google.firestore.v1beta1.Write}
*
* @property {string} transaction
* If set, applies all writes in this transaction, and commits it.
*
* @typedef CommitRequest
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.CommitRequest definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var CommitRequest = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
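// A minimal illustrative sketch of a CommitRequest literal with one update and
// one delete, applied atomically and in order; identifiers are hypothetical.
var exampleCommitRequest = {
  database: 'projects/my-project/databases/my-database',
  writes: [
    {
      update: {
        name: 'projects/my-project/databases/my-database/documents/users/alovelace',
        fields: {displayName: {stringValue: 'Ada Lovelace'}}
      }
    },
    {delete: 'projects/my-project/databases/my-database/documents/users/aturing'}
  ]
};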
/**
* The response for Firestore.Commit.
*
* @property {Object[]} writeResults
* The result of applying the writes.
*
 * The i-th write result corresponds to the i-th write in the
* request.
*
* This object should have the same structure as [WriteResult]{@link
* google.firestore.v1beta1.WriteResult}
*
* @property {Object} commitTime
* The time at which the commit occurred.
*
* This object should have the same structure as [Timestamp]{@link
* google.protobuf.Timestamp}
*
* @typedef CommitResponse
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.CommitResponse definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var CommitResponse = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* The request for Firestore.Rollback.
*
* @property {string} database
* The database name. In the format:
* `projects/{project_id}/databases/{database_id}`.
*
* @property {string} transaction
* The transaction to roll back.
*
* @typedef RollbackRequest
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.RollbackRequest definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var RollbackRequest = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* The request for Firestore.RunQuery.
*
* @property {string} parent
* The parent resource name. In the format:
* `projects/{project_id}/databases/{database_id}/documents` or
* `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
* For example:
* `projects/my-project/databases/my-database/documents` or
* `projects/my-project/databases/my-database/documents/chatrooms/my-chatroom`
*
* @property {Object} structuredQuery
* A structured query.
*
* This object should have the same structure as [StructuredQuery]{@link
* google.firestore.v1beta1.StructuredQuery}
*
* @property {string} transaction
* Reads documents in a transaction.
*
* @property {Object} newTransaction
* Starts a new transaction and reads the documents.
* Defaults to a read-only transaction.
* The new transaction ID will be returned as the first response in the
* stream.
*
* This object should have the same structure as [TransactionOptions]{@link
* google.firestore.v1beta1.TransactionOptions}
*
* @property {Object} readTime
* Reads documents as they were at the given time.
* This may not be older than 60 seconds.
*
* This object should have the same structure as [Timestamp]{@link
* google.protobuf.Timestamp}
*
* @typedef RunQueryRequest
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.RunQueryRequest definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var RunQueryRequest = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* The response for Firestore.RunQuery.
*
* @property {string} transaction
* The transaction that was started as part of this request.
* Can only be set in the first response, and only if
* RunQueryRequest.new_transaction was set in the request.
* If set, no other fields will be set in this response.
*
* @property {Object} document
* A query result.
* Not set when reporting partial progress.
*
* This object should have the same structure as [Document]{@link
* google.firestore.v1beta1.Document}
*
* @property {Object} readTime
* The time at which the document was read. This may be monotonically
* increasing; in this case, the previous documents in the result stream are
* guaranteed not to have changed between their `read_time` and this one.
*
* If the query returns no results, a response with `read_time` and no
* `document` will be sent, and this represents the time at which the query
* was run.
*
* This object should have the same structure as [Timestamp]{@link
* google.protobuf.Timestamp}
*
* @property {number} skippedResults
* The number of results that have been skipped due to an offset between
* the last response and the current response.
*
* @typedef RunQueryResponse
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.RunQueryResponse definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var RunQueryResponse = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* The request for Firestore.Write.
*
* The first request creates a stream, or resumes an existing one from a token.
*
* When creating a new stream, the server replies with a response containing
* only an ID and a token, to use in the next request.
*
* When resuming a stream, the server first streams any responses later than the
* given token, then a response containing only an up-to-date token, to use in
* the next request.
*
* @property {string} database
* The database name. In the format:
* `projects/{project_id}/databases/{database_id}`.
* This is only required in the first message.
*
* @property {string} streamId
* The ID of the write stream to resume.
* This may only be set in the first message. When left empty, a new write
* stream will be created.
*
* @property {Object[]} writes
* The writes to apply.
*
* Always executed atomically and in order.
* This must be empty on the first request.
* This may be empty on the last request.
* This must not be empty on all other requests.
*
* This object should have the same structure as [Write]{@link
* google.firestore.v1beta1.Write}
*
* @property {string} streamToken
* A stream token that was previously sent by the server.
*
* The client should set this field to the token from the most recent
* WriteResponse it has received. This acknowledges that the client has
* received responses up to this token. After sending this token, earlier
* tokens may not be used anymore.
*
* The server may close the stream if there are too many unacknowledged
* responses.
*
* Leave this field unset when creating a new stream. To resume a stream at
* a specific point, set this field and the `stream_id` field.
*
* @property {Object.<string, string>} labels
* Labels associated with this write request.
*
* @typedef WriteRequest
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.WriteRequest definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var WriteRequest = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
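// Minimal illustrative sketches of WriteRequest literals: the first message on
// a new stream (only `database` set, `writes` empty), and a later message that
// applies a write and acknowledges the most recent stream token. The token and
// identifiers are hypothetical.
var exampleFirstWriteRequest = {
  database: 'projects/my-project/databases/my-database',
  writes: []
};
var exampleLaterWriteRequest = {
  streamToken: 'token-from-the-most-recent-WriteResponse',
  writes: [
    {delete: 'projects/my-project/databases/my-database/documents/users/aturing'}
  ]
};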
/**
* The response for Firestore.Write.
*
* @property {string} streamId
* The ID of the stream.
* Only set on the first message, when a new stream was created.
*
* @property {string} streamToken
* A token that represents the position of this response in the stream.
* This can be used by a client to resume the stream at this point.
*
* This field is always set.
*
* @property {Object[]} writeResults
* The result of applying the writes.
*
 * The i-th write result corresponds to the i-th write in the
* request.
*
* This object should have the same structure as [WriteResult]{@link
* google.firestore.v1beta1.WriteResult}
*
* @property {Object} commitTime
* The time at which the commit occurred.
*
* This object should have the same structure as [Timestamp]{@link
* google.protobuf.Timestamp}
*
* @typedef WriteResponse
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.WriteResponse definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var WriteResponse = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* A request for Firestore.Listen
*
* @property {string} database
* The database name. In the format:
* `projects/{project_id}/databases/{database_id}`.
*
* @property {Object} addTarget
* A target to add to this stream.
*
* This object should have the same structure as [Target]{@link
* google.firestore.v1beta1.Target}
*
* @property {number} removeTarget
* The ID of a target to remove from this stream.
*
* @property {Object.<string, string>} labels
* Labels associated with this target change.
*
* @typedef ListenRequest
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.ListenRequest definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var ListenRequest = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* The response for Firestore.Listen.
*
* @property {Object} targetChange
* Targets have changed.
*
* This object should have the same structure as [TargetChange]{@link
* google.firestore.v1beta1.TargetChange}
*
* @property {Object} documentChange
* A Document has changed.
*
* This object should have the same structure as [DocumentChange]{@link
* google.firestore.v1beta1.DocumentChange}
*
* @property {Object} documentDelete
* A Document has been deleted.
*
* This object should have the same structure as [DocumentDelete]{@link
* google.firestore.v1beta1.DocumentDelete}
*
* @property {Object} documentRemove
* A Document has been removed from a target (because it is no longer
* relevant to that target).
*
* This object should have the same structure as [DocumentRemove]{@link
* google.firestore.v1beta1.DocumentRemove}
*
* @property {Object} filter
* A filter to apply to the set of documents previously returned for the
* given target.
*
* Returned when documents may have been removed from the given target, but
* the exact documents are unknown.
*
* This object should have the same structure as [ExistenceFilter]{@link
* google.firestore.v1beta1.ExistenceFilter}
*
* @typedef ListenResponse
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.ListenResponse definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var ListenResponse = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* A specification of a set of documents to listen to.
*
* @property {Object} query
* A target specified by a query.
*
* This object should have the same structure as [QueryTarget]{@link
* google.firestore.v1beta1.QueryTarget}
*
* @property {Object} documents
* A target specified by a set of document names.
*
* This object should have the same structure as [DocumentsTarget]{@link
* google.firestore.v1beta1.DocumentsTarget}
*
* @property {string} resumeToken
* A resume token from a prior TargetChange for an identical target.
*
* Using a resume token with a different target is unsupported and may fail.
*
* @property {Object} readTime
* Start listening after a specific `read_time`.
*
* The client must know the state of matching documents at this time.
*
* This object should have the same structure as [Timestamp]{@link
* google.protobuf.Timestamp}
*
* @property {number} targetId
* A client provided target ID.
*
* If not set, the server will assign an ID for the target.
*
* Used for resuming a target without changing IDs. The IDs can either be
* client-assigned or be server-assigned in a previous stream. All targets
* with client provided IDs must be added before adding a target that needs
* a server-assigned id.
*
* @property {boolean} once
* If the target should be removed once it is current and consistent.
*
* @typedef Target
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.Target definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var Target = {
// This is for documentation. Actual contents will be loaded by gRPC.
/**
 * A target specified by a set of document names.
*
* @property {string[]} documents
* The names of the documents to retrieve. In the format:
* `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
 * The request will fail if any of the documents is not a child resource of
* the given `database`. Duplicate names will be elided.
*
* @typedef DocumentsTarget
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.Target.DocumentsTarget definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
DocumentsTarget: {
// This is for documentation. Actual contents will be loaded by gRPC.
},
/**
* A target specified by a query.
*
* @property {string} parent
* The parent resource name. In the format:
* `projects/{project_id}/databases/{database_id}/documents` or
* `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
* For example:
* `projects/my-project/databases/my-database/documents` or
* `projects/my-project/databases/my-database/documents/chatrooms/my-chatroom`
*
* @property {Object} structuredQuery
* A structured query.
*
* This object should have the same structure as [StructuredQuery]{@link
* google.firestore.v1beta1.StructuredQuery}
*
* @typedef QueryTarget
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.Target.QueryTarget definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
QueryTarget: {
// This is for documentation. Actual contents will be loaded by gRPC.
}
};
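// Minimal illustrative sketches of Target literals: a QueryTarget over one
// collection and a DocumentsTarget for a specific document. The target IDs,
// resource names, and collection ID are hypothetical.
var exampleQueryTarget = {
  targetId: 1,
  query: {
    parent: 'projects/my-project/databases/my-database/documents',
    structuredQuery: {from: [{collectionId: 'chatrooms'}]}
  }
};
var exampleDocumentsTarget = {
  targetId: 2,
  documents: {
    documents: ['projects/my-project/databases/my-database/documents/users/alovelace']
  }
};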
/**
* Targets being watched have changed.
*
* @property {number} targetChangeType
* The type of change that occurred.
*
* The number should be among the values of [TargetChangeType]{@link
* google.firestore.v1beta1.TargetChangeType}
*
* @property {number[]} targetIds
* The target IDs of targets that have changed.
*
* If empty, the change applies to all targets.
*
* For `target_change_type=ADD`, the order of the target IDs matches the order
* of the requests to add the targets. This allows clients to unambiguously
* associate server-assigned target IDs with added targets.
*
* For other states, the order of the target IDs is not defined.
*
* @property {Object} cause
* The error that resulted in this change, if applicable.
*
* This object should have the same structure as [Status]{@link
* google.rpc.Status}
*
* @property {string} resumeToken
* A token that can be used to resume the stream for the given `target_ids`,
* or all targets if `target_ids` is empty.
*
* Not set on every target change.
*
* @property {Object} readTime
* The consistent `read_time` for the given `target_ids` (omitted when the
* target_ids are not at a consistent snapshot).
*
* The stream is guaranteed to send a `read_time` with `target_ids` empty
* whenever the entire stream reaches a new consistent snapshot. ADD,
* CURRENT, and RESET messages are guaranteed to (eventually) result in a
* new consistent snapshot (while NO_CHANGE and REMOVE messages are not).
*
* For a given stream, `read_time` is guaranteed to be monotonically
* increasing.
*
* This object should have the same structure as [Timestamp]{@link
* google.protobuf.Timestamp}
*
* @typedef TargetChange
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.TargetChange definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var TargetChange = {
// This is for documentation. Actual contents will be loaded by gRPC.
/**
* The type of change.
*
* @enum {number}
* @memberof google.firestore.v1beta1
*/
TargetChangeType: {
/**
* No change has occurred. Used only to send an updated `resume_token`.
*/
NO_CHANGE: 0,
/**
* The targets have been added.
*/
ADD: 1,
/**
* The targets have been removed.
*/
REMOVE: 2,
/**
* The targets reflect all changes committed before the targets were added
* to the stream.
*
* This will be sent after or with a `read_time` that is greater than or
* equal to the time at which the targets were added.
*
* Listeners can wait for this change if read-after-write semantics
* are desired.
*/
CURRENT: 3,
/**
* The targets have been reset, and a new initial state for the targets
* will be returned in subsequent changes.
*
* After the initial state is complete, `CURRENT` will be returned even
* if the target was previously indicated to be `CURRENT`.
*/
RESET: 4
}
};
/**
* The request for Firestore.ListCollectionIds.
*
* @property {string} parent
* The parent document. In the format:
* `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
* For example:
* `projects/my-project/databases/my-database/documents/chatrooms/my-chatroom`
*
* @property {number} pageSize
* The maximum number of results to return.
*
* @property {string} pageToken
* A page token. Must be a value from
* ListCollectionIdsResponse.
*
* @typedef ListCollectionIdsRequest
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.ListCollectionIdsRequest definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var ListCollectionIdsRequest = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* The response from Firestore.ListCollectionIds.
*
* @property {string[]} collectionIds
 * The collection IDs.
*
* @property {string} nextPageToken
* A page token that may be used to continue the list.
*
* @typedef ListCollectionIdsResponse
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.ListCollectionIdsResponse definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/firestore.proto}
*/
var ListCollectionIdsResponse = {
// This is for documentation. Actual contents will be loaded by gRPC.
};

View File

@ -0,0 +1,379 @@
"use strict";
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Note: this file is purely for documentation. Any contents are not expected
// to be loaded as the JS file.
/**
* A Firestore query.
*
* @property {Object} select
* The projection to return.
*
* This object should have the same structure as [Projection]{@link
* google.firestore.v1beta1.Projection}
*
* @property {Object[]} from
* The collections to query.
*
* This object should have the same structure as [CollectionSelector]{@link
* google.firestore.v1beta1.CollectionSelector}
*
* @property {Object} where
* The filter to apply.
*
* This object should have the same structure as [Filter]{@link
* google.firestore.v1beta1.Filter}
*
* @property {Object[]} orderBy
* The order to apply to the query results.
*
* Firestore guarantees a stable ordering through the following rules:
*
 *  * Any field required to appear in `order_by` that is not already
 *    specified in `order_by` is appended to the order in field name order
* by default.
* * If an order on `__name__` is not specified, it is appended by default.
*
* Fields are appended with the same sort direction as the last order
* specified, or 'ASCENDING' if no order was specified. For example:
*
* * `SELECT * FROM Foo ORDER BY A` becomes
* `SELECT * FROM Foo ORDER BY A, __name__`
* * `SELECT * FROM Foo ORDER BY A DESC` becomes
* `SELECT * FROM Foo ORDER BY A DESC, __name__ DESC`
* * `SELECT * FROM Foo WHERE A > 1` becomes
* `SELECT * FROM Foo WHERE A > 1 ORDER BY A, __name__`
*
* This object should have the same structure as [Order]{@link
* google.firestore.v1beta1.Order}
*
* @property {Object} startAt
* A starting point for the query results.
*
* This object should have the same structure as [Cursor]{@link
* google.firestore.v1beta1.Cursor}
*
* @property {Object} endAt
 * An end point for the query results.
*
* This object should have the same structure as [Cursor]{@link
* google.firestore.v1beta1.Cursor}
*
* @property {number} offset
* The number of results to skip.
*
* Applies before limit, but after all other constraints. Must be >= 0 if
* specified.
*
* @property {Object} limit
* The maximum number of results to return.
*
* Applies after all other constraints.
* Must be >= 0 if specified.
*
* This object should have the same structure as [Int32Value]{@link
* google.protobuf.Int32Value}
*
* @typedef StructuredQuery
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.StructuredQuery definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/query.proto}
*/
var StructuredQuery = {
// This is for documentation. Actual contents will be loaded by gRPC.
/**
* A selection of a collection, such as `messages as m1`.
*
* @property {string} collectionId
* The collection ID.
* When set, selects only collections with this ID.
*
* @property {boolean} allDescendants
* When false, selects only collections that are immediate children of
* the `parent` specified in the containing `RunQueryRequest`.
* When true, selects all descendant collections.
*
* @typedef CollectionSelector
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.StructuredQuery.CollectionSelector definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/query.proto}
*/
CollectionSelector: {
// This is for documentation. Actual contents will be loaded by gRPC.
},
/**
* A filter.
*
* @property {Object} compositeFilter
* A composite filter.
*
* This object should have the same structure as [CompositeFilter]{@link
* google.firestore.v1beta1.CompositeFilter}
*
* @property {Object} fieldFilter
* A filter on a document field.
*
* This object should have the same structure as [FieldFilter]{@link
* google.firestore.v1beta1.FieldFilter}
*
* @property {Object} unaryFilter
* A filter that takes exactly one argument.
*
* This object should have the same structure as [UnaryFilter]{@link
* google.firestore.v1beta1.UnaryFilter}
*
* @typedef Filter
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.StructuredQuery.Filter definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/query.proto}
*/
Filter: {
// This is for documentation. Actual contents will be loaded by gRPC.
},
/**
* A filter that merges multiple other filters using the given operator.
*
* @property {number} op
* The operator for combining multiple filters.
*
* The number should be among the values of [Operator]{@link
* google.firestore.v1beta1.Operator}
*
* @property {Object[]} filters
* The list of filters to combine.
* Must contain at least one filter.
*
* This object should have the same structure as [Filter]{@link
* google.firestore.v1beta1.Filter}
*
* @typedef CompositeFilter
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.StructuredQuery.CompositeFilter definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/query.proto}
*/
CompositeFilter: {
// This is for documentation. Actual contents will be loaded by gRPC.
/**
* A composite filter operator.
*
* @enum {number}
* @memberof google.firestore.v1beta1
*/
Operator: {
/**
* Unspecified. This value must not be used.
*/
OPERATOR_UNSPECIFIED: 0,
/**
* The results are required to satisfy each of the combined filters.
*/
AND: 1
}
},
/**
* A filter on a specific field.
*
* @property {Object} field
* The field to filter by.
*
* This object should have the same structure as [FieldReference]{@link
* google.firestore.v1beta1.FieldReference}
*
* @property {number} op
* The operator to filter by.
*
* The number should be among the values of [Operator]{@link
* google.firestore.v1beta1.Operator}
*
* @property {Object} value
* The value to compare to.
*
* This object should have the same structure as [Value]{@link
* google.firestore.v1beta1.Value}
*
* @typedef FieldFilter
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.StructuredQuery.FieldFilter definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/query.proto}
*/
FieldFilter: {
// This is for documentation. Actual contents will be loaded by gRPC.
/**
* A field filter operator.
*
* @enum {number}
* @memberof google.firestore.v1beta1
*/
Operator: {
/**
* Unspecified. This value must not be used.
*/
OPERATOR_UNSPECIFIED: 0,
/**
* Less than. Requires that the field come first in `order_by`.
*/
LESS_THAN: 1,
/**
* Less than or equal. Requires that the field come first in `order_by`.
*/
LESS_THAN_OR_EQUAL: 2,
/**
* Greater than. Requires that the field come first in `order_by`.
*/
GREATER_THAN: 3,
/**
* Greater than or equal. Requires that the field come first in
* `order_by`.
*/
GREATER_THAN_OR_EQUAL: 4,
/**
* Equal.
*/
EQUAL: 5
}
},
/**
* A filter with a single operand.
*
* @property {number} op
* The unary operator to apply.
*
* The number should be among the values of [Operator]{@link
* google.firestore.v1beta1.Operator}
*
* @property {Object} field
* The field to which to apply the operator.
*
* This object should have the same structure as [FieldReference]{@link
* google.firestore.v1beta1.FieldReference}
*
* @typedef UnaryFilter
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.StructuredQuery.UnaryFilter definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/query.proto}
*/
UnaryFilter: {
// This is for documentation. Actual contents will be loaded by gRPC.
/**
* A unary operator.
*
* @enum {number}
* @memberof google.firestore.v1beta1
*/
Operator: {
/**
* Unspecified. This value must not be used.
*/
OPERATOR_UNSPECIFIED: 0,
/**
* Test if a field is equal to NaN.
*/
IS_NAN: 2,
/**
 * Test if an expression evaluates to Null.
*/
IS_NULL: 3
}
},
/**
* An order on a field.
*
* @property {Object} field
* The field to order by.
*
* This object should have the same structure as [FieldReference]{@link
* google.firestore.v1beta1.FieldReference}
*
* @property {number} direction
* The direction to order by. Defaults to `ASCENDING`.
*
* The number should be among the values of [Direction]{@link
* google.firestore.v1beta1.Direction}
*
* @typedef Order
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.StructuredQuery.Order definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/query.proto}
*/
Order: {
// This is for documentation. Actual contents will be loaded by gRPC.
},
/**
* A reference to a field, such as `max(messages.time) as max_time`.
*
* @property {string} fieldPath
*
* @typedef FieldReference
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.StructuredQuery.FieldReference definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/query.proto}
*/
FieldReference: {
// This is for documentation. Actual contents will be loaded by gRPC.
},
/**
 * The projection of a document's fields to return.
*
* @property {Object[]} fields
* The fields to return.
*
* If empty, all fields are returned. To only return the name
* of the document, use `['__name__']`.
*
* This object should have the same structure as [FieldReference]{@link
* google.firestore.v1beta1.FieldReference}
*
* @typedef Projection
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.StructuredQuery.Projection definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/query.proto}
*/
Projection: {
// This is for documentation. Actual contents will be loaded by gRPC.
},
/**
* A sort direction.
*
* @enum {number}
* @memberof google.firestore.v1beta1
*/
Direction: {
/**
* Unspecified.
*/
DIRECTION_UNSPECIFIED: 0,
/**
* Ascending.
*/
ASCENDING: 1,
/**
* Descending.
*/
DESCENDING: 2
}
};
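// A minimal illustrative sketch of a StructuredQuery literal using the field
// names and numeric enum values documented above: select two fields from one
// collection, filter, order, and limit. Field and collection names are
// hypothetical.
var exampleStructuredQuery = {
  select: {fields: [{fieldPath: 'name'}, {fieldPath: 'priority'}]},
  from: [{collectionId: 'tasks', allDescendants: false}],
  where: {
    fieldFilter: {
      field: {fieldPath: 'priority'},
      op: 3, // GREATER_THAN
      value: {integerValue: 1}
    }
  },
  orderBy: [{field: {fieldPath: 'priority'}, direction: 2 /* DESCENDING */}],
  limit: {value: 10}
};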
/**
* A position in a query result set.
*
* @property {Object[]} values
* The values that represent a position, in the order they appear in
* the order by clause of a query.
*
* Can contain fewer values than specified in the order by clause.
*
* This object should have the same structure as [Value]{@link
* google.firestore.v1beta1.Value}
*
* @property {boolean} before
* If the position is just before or just after the given values, relative
* to the sort order defined by the query.
*
* @typedef Cursor
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.Cursor definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/query.proto}
*/
var Cursor = {
// This is for documentation. Actual contents will be loaded by gRPC.
};

View File

@ -0,0 +1,262 @@
"use strict";
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Note: this file is purely for documentation. Any contents are not expected
// to be loaded as the JS file.
/**
* A write on a document.
*
* @property {Object} update
* A document to write.
*
* This object should have the same structure as [Document]{@link
* google.firestore.v1beta1.Document}
*
* @property {string} delete
* A document name to delete. In the format:
* `projects/{project_id}/databases/{database_id}/documents/{document_path}`.
*
* @property {Object} transform
 * Applies a transformation to a document.
* At most one `transform` per document is allowed in a given request.
* An `update` cannot follow a `transform` on the same document in a given
* request.
*
* This object should have the same structure as [DocumentTransform]{@link
* google.firestore.v1beta1.DocumentTransform}
*
* @property {Object} updateMask
* The fields to update in this write.
*
* This field can be set only when the operation is `update`.
* None of the field paths in the mask may contain a reserved name.
* If the document exists on the server and has fields not referenced in the
* mask, they are left unchanged.
* Fields referenced in the mask, but not present in the input document, are
* deleted from the document on the server.
* The field paths in this mask must not contain a reserved field name.
*
* This object should have the same structure as [DocumentMask]{@link
* google.firestore.v1beta1.DocumentMask}
*
* @property {Object} currentDocument
* An optional precondition on the document.
*
* The write will fail if this is set and not met by the target document.
*
* This object should have the same structure as [Precondition]{@link
* google.firestore.v1beta1.Precondition}
*
* @typedef Write
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.Write definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/write.proto}
*/
var Write = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
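// A minimal illustrative sketch of a Write literal: update one field of an
// existing document, guarded by an `exists` precondition. Identifiers are
// hypothetical.
var exampleWrite = {
  update: {
    name: 'projects/my-project/databases/my-database/documents/users/alovelace',
    fields: {lastName: {stringValue: 'Lovelace'}}
  },
  updateMask: {fieldPaths: ['lastName']},
  currentDocument: {exists: true}
};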
/**
* A transformation of a document.
*
* @property {string} document
* The name of the document to transform.
*
* @property {Object[]} fieldTransforms
* The list of transformations to apply to the fields of the document, in
* order.
* This must not be empty.
*
* This object should have the same structure as [FieldTransform]{@link
* google.firestore.v1beta1.FieldTransform}
*
* @typedef DocumentTransform
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.DocumentTransform definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/write.proto}
*/
var DocumentTransform = {
// This is for documentation. Actual contents will be loaded by gRPC.
/**
* A transformation of a field of the document.
*
* @property {string} fieldPath
* The path of the field. See Document.fields for the field path syntax
* reference.
*
* @property {number} setToServerValue
* Sets the field to the given server value.
*
* The number should be among the values of [ServerValue]{@link
* google.firestore.v1beta1.ServerValue}
*
* @typedef FieldTransform
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.DocumentTransform.FieldTransform definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/write.proto}
*/
FieldTransform: {
// This is for documentation. Actual contents will be loaded by gRPC.
/**
* A value that is calculated by the server.
*
* @enum {number}
* @memberof google.firestore.v1beta1
*/
ServerValue: {
/**
* Unspecified. This value must not be used.
*/
SERVER_VALUE_UNSPECIFIED: 0,
/**
* The time at which the server processed the request, with millisecond
* precision.
*/
REQUEST_TIME: 1
}
}
};
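// A minimal illustrative sketch of a DocumentTransform literal that sets one
// field to the server's request time; the document name and field path are
// hypothetical.
var exampleDocumentTransform = {
  document: 'projects/my-project/databases/my-database/documents/users/alovelace',
  fieldTransforms: [
    {fieldPath: 'lastSeen', setToServerValue: 1 /* REQUEST_TIME */}
  ]
};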
/**
* The result of applying a write.
*
* @property {Object} updateTime
* The last update time of the document after applying the write. Not set
* after a `delete`.
*
* If the write did not actually change the document, this will be the
* previous update_time.
*
* This object should have the same structure as [Timestamp]{@link
* google.protobuf.Timestamp}
*
* @property {Object[]} transformResults
* The results of applying each DocumentTransform.FieldTransform, in the
* same order.
*
* This object should have the same structure as [Value]{@link
* google.firestore.v1beta1.Value}
*
* @typedef WriteResult
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.WriteResult definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/write.proto}
*/
var WriteResult = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* A Document has changed.
*
* May be the result of multiple writes, including deletes, that
* ultimately resulted in a new value for the Document.
*
* Multiple DocumentChange messages may be returned for the same logical
* change, if multiple targets are affected.
*
* @property {Object} document
* The new state of the Document.
*
* If `mask` is set, contains only fields that were updated or added.
*
* This object should have the same structure as [Document]{@link
* google.firestore.v1beta1.Document}
*
* @property {number[]} targetIds
* A set of target IDs of targets that match this document.
*
* @property {number[]} removedTargetIds
* A set of target IDs for targets that no longer match this document.
*
* @typedef DocumentChange
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.DocumentChange definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/write.proto}
*/
var DocumentChange = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* A Document has been deleted.
*
* May be the result of multiple writes, including updates, the
* last of which deleted the Document.
*
* Multiple DocumentDelete messages may be returned for the same logical
* delete, if multiple targets are affected.
*
* @property {string} document
* The resource name of the Document that was deleted.
*
* @property {number[]} removedTargetIds
* A set of target IDs for targets that previously matched this entity.
*
* @property {Object} readTime
* The read timestamp at which the delete was observed.
*
 * Greater than or equal to the `commit_time` of the delete.
*
* This object should have the same structure as [Timestamp]{@link
* google.protobuf.Timestamp}
*
* @typedef DocumentDelete
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.DocumentDelete definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/write.proto}
*/
var DocumentDelete = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* A Document has been removed from the view of the targets.
*
* Sent if the document is no longer relevant to a target and is out of view.
* Can be sent instead of a DocumentDelete or a DocumentChange if the server
* cannot send the new value of the document.
*
* Multiple DocumentRemove messages may be returned for the same logical
* write or delete, if multiple targets are affected.
*
* @property {string} document
* The resource name of the Document that has gone out of view.
*
* @property {number[]} removedTargetIds
* A set of target IDs for targets that previously matched this document.
*
* @property {Object} readTime
* The read timestamp at which the remove was observed.
*
* Greater than or equal to the `commit_time` of the change/delete/remove.
*
* This object should have the same structure as [Timestamp]{@link
* google.protobuf.Timestamp}
*
* @typedef DocumentRemove
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.DocumentRemove definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/write.proto}
*/
var DocumentRemove = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* A digest of all the documents that match a given target.
*
* @property {number} targetId
* The target ID to which this filter applies.
*
* @property {number} count
* The total count of documents that match target_id.
*
* If different from the count of documents in the client that match, the
* client must manually determine which documents no longer match the target.
*
* @typedef ExistenceFilter
* @memberof google.firestore.v1beta1
* @see [google.firestore.v1beta1.ExistenceFilter definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/firestore/v1beta1/write.proto}
*/
var ExistenceFilter = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
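// A hedged sketch (illustration only) of the reconciliation described for
// ExistenceFilter above: if the server-reported count disagrees with the local
// count, the client cannot tell which documents dropped out and must re-run
// the query. `localCount` and `requery` are hypothetical placeholders.
function reconcileExistenceFilter(filter, localCount, requery) {
  if (filter.count !== localCount) {
    // Counts differ: some locally cached documents no longer match the target.
    requery();
  }
}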

View File

@ -0,0 +1,130 @@
"use strict";
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Note: this file is purely for documentation. Any contents are not expected
// to be loaded as the JS file.
/**
* `Any` contains an arbitrary serialized protocol buffer message along with a
* URL that describes the type of the serialized message.
*
* Protobuf library provides support to pack/unpack Any values in the form
* of utility functions or additional generated methods of the Any type.
*
* Example 1: Pack and unpack a message in C++.
*
* Foo foo = ...;
* Any any;
* any.PackFrom(foo);
* ...
* if (any.UnpackTo(&foo)) {
* ...
* }
*
* Example 2: Pack and unpack a message in Java.
*
* Foo foo = ...;
* Any any = Any.pack(foo);
* ...
* if (any.is(Foo.class)) {
* foo = any.unpack(Foo.class);
* }
*
* Example 3: Pack and unpack a message in Python.
*
* foo = Foo(...)
* any = Any()
* any.Pack(foo)
* ...
* if any.Is(Foo.DESCRIPTOR):
* any.Unpack(foo)
* ...
*
* Example 4: Pack and unpack a message in Go
*
* foo := &pb.Foo{...}
* any, err := ptypes.MarshalAny(foo)
* ...
* foo := &pb.Foo{}
* if err := ptypes.UnmarshalAny(any, foo); err != nil {
* ...
* }
*
* The pack methods provided by protobuf library will by default use
* 'type.googleapis.com/full.type.name' as the type URL and the unpack
* methods only use the fully qualified type name after the last '/'
* in the type URL, for example "foo.bar.com/x/y.z" will yield type
* name "y.z".
*
*
* # JSON
*
* The JSON representation of an `Any` value uses the regular
* representation of the deserialized, embedded message, with an
* additional field `@type` which contains the type URL. Example:
*
* package google.profile;
* message Person {
* string first_name = 1;
* string last_name = 2;
* }
*
* {
* "@type": "type.googleapis.com/google.profile.Person",
* "firstName": <string>,
* "lastName": <string>
* }
*
* If the embedded message type is well-known and has a custom JSON
* representation, that representation will be embedded adding a field
* `value` which holds the custom JSON in addition to the `@type`
* field. Example (for message google.protobuf.Duration):
*
* {
* "@type": "type.googleapis.com/google.protobuf.Duration",
* "value": "1.212s"
* }
*
* @property {string} typeUrl
* A URL/resource name whose content describes the type of the
* serialized protocol buffer message.
*
* For URLs which use the scheme `http`, `https`, or no scheme, the
* following restrictions and interpretations apply:
*
* * If no scheme is provided, `https` is assumed.
* * The last segment of the URL's path must represent the fully
* qualified name of the type (as in `path/google.protobuf.Duration`).
* The name should be in a canonical form (e.g., leading "." is
* not accepted).
* * An HTTP GET on the URL must yield a google.protobuf.Type
* value in binary format, or produce an error.
* * Applications are allowed to cache lookup results based on the
* URL, or have them precompiled into a binary to avoid any
* lookup. Therefore, binary compatibility needs to be preserved
* on changes to types. (Use versioned type names to manage
* breaking changes.)
*
* Schemes other than `http`, `https` (or the empty scheme) might be
* used with implementation specific semantics.
*
* @property {string} value
* Must be a valid serialized protocol buffer of the above specified type.
*
* @typedef Any
* @memberof google.protobuf
* @see [google.protobuf.Any definition in proto format]{@link https://github.com/google/protobuf/blob/master/src/google/protobuf/any.proto}
*/
var Any = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
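// A hedged illustration, not part of the generated stubs: the JSON form of an
// `Any` carrying a google.protobuf.Duration, exactly as described in the JSON
// section above.
var exampleAnyJson = {
  '@type': 'type.googleapis.com/google.protobuf.Duration',
  value: '1.212s',
};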

View File

@ -0,0 +1,33 @@
"use strict";
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Note: this file is purely for documentation. Any contents are not expected
// to be loaded as the JS file.
/**
* A generic empty message that you can re-use to avoid defining duplicated
* empty messages in your APIs. A typical example is to use it as the request
* or the response type of an API method. For instance:
*
* service Foo {
* rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
* }
*
* The JSON representation for `Empty` is empty JSON object `{}`.
* @typedef Empty
* @memberof google.protobuf
* @see [google.protobuf.Empty definition in proto format]{@link https://github.com/google/protobuf/blob/master/src/google/protobuf/empty.proto}
*/
var Empty = {
// This is for documentation. Actual contents will be loaded by gRPC.
};

View File

@ -0,0 +1,115 @@
"use strict";
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Note: this file is purely for documentation. Any contents are not expected
// to be loaded as the JS file.
/**
* A Timestamp represents a point in time independent of any time zone
* or calendar, represented as seconds and fractions of seconds at
* nanosecond resolution in UTC Epoch time. It is encoded using the
* Proleptic Gregorian Calendar which extends the Gregorian calendar
* backwards to year one. It is encoded assuming all minutes are 60
* seconds long, i.e. leap seconds are "smeared" so that no leap second
* table is needed for interpretation. Range is from
* 0001-01-01T00:00:00Z to 9999-12-31T23:59:59.999999999Z.
* By restricting to that range, we ensure that we can convert to
* and from RFC 3339 date strings.
* See
* [https://www.ietf.org/rfc/rfc3339.txt](https://www.ietf.org/rfc/rfc3339.txt).
*
* # Examples
*
* Example 1: Compute Timestamp from POSIX `time()`.
*
* Timestamp timestamp;
* timestamp.set_seconds(time(NULL));
* timestamp.set_nanos(0);
*
* Example 2: Compute Timestamp from POSIX `gettimeofday()`.
*
* struct timeval tv;
* gettimeofday(&tv, NULL);
*
* Timestamp timestamp;
* timestamp.set_seconds(tv.tv_sec);
* timestamp.set_nanos(tv.tv_usec * 1000);
*
* Example 3: Compute Timestamp from Win32 `GetSystemTimeAsFileTime()`.
*
* FILETIME ft;
* GetSystemTimeAsFileTime(&ft);
* UINT64 ticks = (((UINT64)ft.dwHighDateTime) << 32) | ft.dwLowDateTime;
*
* // A Windows tick is 100 nanoseconds. Windows epoch 1601-01-01T00:00:00Z
* // is 11644473600 seconds before Unix epoch 1970-01-01T00:00:00Z.
* Timestamp timestamp;
* timestamp.set_seconds((INT64) ((ticks / 10000000) - 11644473600LL));
* timestamp.set_nanos((INT32) ((ticks % 10000000) * 100));
*
* Example 4: Compute Timestamp from Java `System.currentTimeMillis()`.
*
* long millis = System.currentTimeMillis();
*
* Timestamp timestamp = Timestamp.newBuilder().setSeconds(millis / 1000)
* .setNanos((int) ((millis % 1000) * 1000000)).build();
*
*
* Example 5: Compute Timestamp from current time in Python.
*
* timestamp = Timestamp()
* timestamp.GetCurrentTime()
*
* # JSON Mapping
*
* In JSON format, the Timestamp type is encoded as a string in the
* [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format. That is, the
* format is "{year}-{month}-{day}T{hour}:{min}:{sec}[.{frac_sec}]Z"
* where {year} is always expressed using four digits while {month}, {day},
* {hour}, {min}, and {sec} are zero-padded to two digits each. The fractional
* seconds, which can go up to 9 digits (i.e. up to 1 nanosecond resolution),
* are optional. The "Z" suffix indicates the timezone ("UTC"); the timezone
* is required, though only UTC (as indicated by "Z") is presently supported.
*
* For example, "2017-01-15T01:30:15.01Z" encodes 15.01 seconds past
* 01:30 UTC on January 15, 2017.
*
* In JavaScript, one can convert a Date object to this format using the
* standard
* [toISOString()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/toISOString)
* method. In Python, a standard `datetime.datetime` object can be converted
* to this format using
* [`strftime`](https://docs.python.org/2/library/time.html#time.strftime) with
* the time format spec '%Y-%m-%dT%H:%M:%S.%fZ'. Likewise, in Java, one can use
* Joda Time's [`ISODateTimeFormat.dateTime()`](http://joda-time.sourceforge.net/apidocs/org/joda/time/format/ISODateTimeFormat.html#dateTime())
* to obtain a formatter capable of generating timestamps in this format.
*
* @property {number} seconds
* Represents seconds of UTC time since Unix epoch
* 1970-01-01T00:00:00Z. Must be from 0001-01-01T00:00:00Z to
* 9999-12-31T23:59:59Z inclusive.
*
* @property {number} nanos
* Non-negative fractions of a second at nanosecond resolution. Negative
* second values with fractions must still have non-negative nanos values
* that count forward in time. Must be from 0 to 999,999,999
* inclusive.
*
* @typedef Timestamp
* @memberof google.protobuf
* @see [google.protobuf.Timestamp definition in proto format]{@link https://github.com/google/protobuf/blob/master/src/google/protobuf/timestamp.proto}
*/
var Timestamp = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
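// A hedged sketch, not generated from the proto: converting between a
// JavaScript Date and the {seconds, nanos} fields documented above. The helper
// names are invented for this example.
function dateToTimestamp(date) {
  var millis = date.getTime();
  var seconds = Math.floor(millis / 1000);
  return {
    seconds: seconds,
    // The remainder is kept non-negative, as required for the nanos field.
    nanos: (millis - seconds * 1000) * 1e6,
  };
}
function timestampToIsoString(timestamp) {
  // The JSON mapping is the RFC 3339 string; Date#toISOString produces it
  // (at millisecond rather than nanosecond precision).
  return new Date(timestamp.seconds * 1000 + timestamp.nanos / 1e6).toISOString();
}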

View File

@ -0,0 +1,151 @@
"use strict";
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Note: this file is purely for documentation. Any contents are not expected
// to be loaded as the JS file.
/**
* Wrapper message for `double`.
*
* The JSON representation for `DoubleValue` is JSON number.
*
* @property {number} value
* The double value.
*
* @typedef DoubleValue
* @memberof google.protobuf
* @see [google.protobuf.DoubleValue definition in proto format]{@link https://github.com/google/protobuf/blob/master/src/google/protobuf/wrappers.proto}
*/
var DoubleValue = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* Wrapper message for `float`.
*
* The JSON representation for `FloatValue` is JSON number.
*
* @property {number} value
* The float value.
*
* @typedef FloatValue
* @memberof google.protobuf
* @see [google.protobuf.FloatValue definition in proto format]{@link https://github.com/google/protobuf/blob/master/src/google/protobuf/wrappers.proto}
*/
var FloatValue = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* Wrapper message for `int64`.
*
* The JSON representation for `Int64Value` is JSON string.
*
* @property {number} value
* The int64 value.
*
* @typedef Int64Value
* @memberof google.protobuf
* @see [google.protobuf.Int64Value definition in proto format]{@link https://github.com/google/protobuf/blob/master/src/google/protobuf/wrappers.proto}
*/
var Int64Value = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* Wrapper message for `uint64`.
*
* The JSON representation for `UInt64Value` is JSON string.
*
* @property {number} value
* The uint64 value.
*
* @typedef UInt64Value
* @memberof google.protobuf
* @see [google.protobuf.UInt64Value definition in proto format]{@link https://github.com/google/protobuf/blob/master/src/google/protobuf/wrappers.proto}
*/
var UInt64Value = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* Wrapper message for `int32`.
*
* The JSON representation for `Int32Value` is JSON number.
*
* @property {number} value
* The int32 value.
*
* @typedef Int32Value
* @memberof google.protobuf
* @see [google.protobuf.Int32Value definition in proto format]{@link https://github.com/google/protobuf/blob/master/src/google/protobuf/wrappers.proto}
*/
var Int32Value = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* Wrapper message for `uint32`.
*
* The JSON representation for `UInt32Value` is JSON number.
*
* @property {number} value
* The uint32 value.
*
* @typedef UInt32Value
* @memberof google.protobuf
* @see [google.protobuf.UInt32Value definition in proto format]{@link https://github.com/google/protobuf/blob/master/src/google/protobuf/wrappers.proto}
*/
var UInt32Value = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* Wrapper message for `bool`.
*
* The JSON representation for `BoolValue` is JSON `true` and `false`.
*
* @property {boolean} value
* The bool value.
*
* @typedef BoolValue
* @memberof google.protobuf
* @see [google.protobuf.BoolValue definition in proto format]{@link https://github.com/google/protobuf/blob/master/src/google/protobuf/wrappers.proto}
*/
var BoolValue = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* Wrapper message for `string`.
*
* The JSON representation for `StringValue` is JSON string.
*
* @property {string} value
* The string value.
*
* @typedef StringValue
* @memberof google.protobuf
* @see [google.protobuf.StringValue definition in proto format]{@link https://github.com/google/protobuf/blob/master/src/google/protobuf/wrappers.proto}
*/
var StringValue = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
/**
* Wrapper message for `bytes`.
*
* The JSON representation for `BytesValue` is JSON string.
*
* @property {string} value
* The bytes value.
*
* @typedef BytesValue
* @memberof google.protobuf
* @see [google.protobuf.BytesValue definition in proto format]{@link https://github.com/google/protobuf/blob/master/src/google/protobuf/wrappers.proto}
*/
var BytesValue = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
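// A hedged illustration of the JSON representations listed above. Note that
// the 64-bit integer wrappers map to JSON strings, while the 32-bit integer
// and floating point wrappers map to JSON numbers; bytes are base64-encoded.
var exampleWrapperJson = {
  doubleValue: 3.14,               // DoubleValue  -> JSON number
  int64Value: '9007199254740993',  // Int64Value   -> JSON string
  uint32Value: 42,                 // UInt32Value  -> JSON number
  boolValue: true,                 // BoolValue    -> JSON true/false
  stringValue: 'hello',            // StringValue  -> JSON string
  bytesValue: 'aGVsbG8=',          // BytesValue   -> base64 JSON string
};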

View File

@ -0,0 +1,92 @@
"use strict";
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Note: this file is purely for documentation. Any contents are not expected
// to be loaded as the JS file.
/**
* The `Status` type defines a logical error model that is suitable for
* different programming environments, including REST APIs and RPC APIs. It is
* used by [gRPC](https://github.com/grpc). The error model is designed to be:
*
* - Simple to use and understand for most users
* - Flexible enough to meet unexpected needs
*
* # Overview
*
* The `Status` message contains three pieces of data: error code, error
* message, and error details. The error code should be an enum value of
* google.rpc.Code, but it may accept additional error codes if needed. The
* error message should be a developer-facing English message that helps
* developers *understand* and *resolve* the error. If a localized user-facing
* error message is needed, put the localized message in the error details or
* localize it in the client. The optional error details may contain arbitrary
* information about the error. There is a predefined set of error detail types
* in the package `google.rpc` that can be used for common error conditions.
*
* # Language mapping
*
* The `Status` message is the logical representation of the error model, but it
* is not necessarily the actual wire format. When the `Status` message is
* exposed in different client libraries and different wire protocols, it can be
* mapped differently. For example, it will likely be mapped to some exceptions
* in Java, but more likely mapped to some error codes in C.
*
* # Other uses
*
* The error model and the `Status` message can be used in a variety of
* environments, either with or without APIs, to provide a
* consistent developer experience across different environments.
*
* Example uses of this error model include:
*
* - Partial errors. If a service needs to return partial errors to the client,
* it may embed the `Status` in the normal response to indicate the partial
* errors.
*
* - Workflow errors. A typical workflow has multiple steps. Each step may
* have a `Status` message for error reporting.
*
* - Batch operations. If a client uses batch request and batch response, the
* `Status` message should be used directly inside batch response, one for
* each error sub-response.
*
* - Asynchronous operations. If an API call embeds asynchronous operation
* results in its response, the status of those operations should be
* represented directly using the `Status` message.
*
* - Logging. If some API errors are stored in logs, the message `Status` could
* be used directly after any stripping needed for security/privacy reasons.
*
* @property {number} code
* The status code, which should be an enum value of google.rpc.Code.
*
* @property {string} message
* A developer-facing error message, which should be in English. Any
* user-facing error message should be localized and sent in the
* google.rpc.Status.details field, or localized by the client.
*
* @property {Object[]} details
* A list of messages that carry the error details. There is a common set of
* message types for APIs to use.
*
* This object should have the same structure as [Any]{@link
* google.protobuf.Any}
*
* @typedef Status
* @memberof google.rpc
* @see [google.rpc.Status definition in proto format]{@link https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto}
*/
var Status = {
// This is for documentation. Actual contents will be loaded by gRPC.
};
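// A hedged illustration, not part of the stubs: a Status-shaped object for a
// NOT_FOUND error (code 5 in google.rpc.Code) with one detail packed as an
// Any. The resource names are invented for this example.
var exampleStatus = {
  code: 5,
  message: 'Document not found',
  details: [
    {
      '@type': 'type.googleapis.com/google.rpc.ResourceInfo',
      resourceType: 'Document',
      resourceName: 'projects/my-project/databases/(default)/documents/col/doc',
    },
  ],
};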

File diff suppressed because it is too large

View File

@ -0,0 +1,100 @@
{
"interfaces": {
"google.firestore.v1beta1.Firestore": {
"retry_codes": {
"idempotent": [
"DEADLINE_EXCEEDED",
"UNAVAILABLE"
],
"non_idempotent": []
},
"retry_params": {
"default": {
"initial_retry_delay_millis": 100,
"retry_delay_multiplier": 1.3,
"max_retry_delay_millis": 60000,
"initial_rpc_timeout_millis": 20000,
"rpc_timeout_multiplier": 1.0,
"max_rpc_timeout_millis": 20000,
"total_timeout_millis": 600000
},
"streaming": {
"initial_retry_delay_millis": 100,
"retry_delay_multiplier": 1.3,
"max_retry_delay_millis": 60000,
"initial_rpc_timeout_millis": 300000,
"rpc_timeout_multiplier": 1.0,
"max_rpc_timeout_millis": 86400000,
"total_timeout_millis": 86400000
}
},
"methods": {
"GetDocument": {
"timeout_millis": 60000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"ListDocuments": {
"timeout_millis": 60000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"CreateDocument": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default"
},
"UpdateDocument": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default"
},
"DeleteDocument": {
"timeout_millis": 60000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"BatchGetDocuments": {
"timeout_millis": 86400000,
"retry_codes_name": "idempotent",
"retry_params_name": "streaming"
},
"BeginTransaction": {
"timeout_millis": 60000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"Commit": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default"
},
"Rollback": {
"timeout_millis": 60000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"RunQuery": {
"timeout_millis": 86400000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"Write": {
"timeout_millis": 300000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "streaming"
},
"Listen": {
"timeout_millis": 86400000,
"retry_codes_name": "idempotent",
"retry_params_name": "streaming"
},
"ListCollectionIds": {
"timeout_millis": 60000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
}
}
}
}
}

View File

@ -0,0 +1,29 @@
/*
* Copyright 2017, Google Inc. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
var firestoreClient = require('./firestore_client');
var gax = require('google-gax');
var extend = require('extend');
function v1beta1(options) {
options = extend({
scopes: v1beta1.ALL_SCOPES,
}, options);
var gaxGrpc = new gax.GrpcClient(options);
return new firestoreClient(gaxGrpc);
}
v1beta1.SERVICE_ADDRESS = firestoreClient.SERVICE_ADDRESS;
v1beta1.ALL_SCOPES = firestoreClient.ALL_SCOPES;
module.exports = v1beta1;
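// A minimal usage sketch, assuming this module is required from its package
// path and that google-gax picks up credentials from the environment:
//
//   var v1beta1 = require('./v1beta1');
//   var client = v1beta1({});
//   // `client` is the generated Firestore GAPIC client constructed above.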

View File

@ -0,0 +1,184 @@
/*!
* Copyright 2017 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
var __importStar = (this && this.__importStar) || function (mod) {
if (mod && mod.__esModule) return mod;
var result = {};
if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k];
result["default"] = mod;
return result;
};
Object.defineProperty(exports, "__esModule", { value: true });
const is = __importStar(require("is"));
/**
* Formats the given word as plural conditionally given the preceding number.
*
* @private
*/
function formatPlural(num, str) {
return `${num} ${str}` + (num === 1 ? '' : 's');
}
/**
* Provides argument validation for the Firestore Public API. Exposes validators
* for strings, integers, numbers, objects and functions by default and can be
* extended to provide custom validators.
*
* The exported validation functions follow the naming convention is{Type} and
* isOptional{Type}, such as "isString" and "isOptionalString".
*
* To register custom validators, invoke the constructor with a mapping
* from type names to validation functions. Validation functions return 'true'
* for valid inputs and may throw errors with custom validation messages for
* easier diagnosis.
*
* @private
*/
class Validator {
/**
* Create a new Validator, optionally registering the custom validators as
* provided.
*
* @param customValidators A list of custom validators to register.
*/
constructor(customValidators) {
const validators = Object.assign({
function: is.function,
integer: (value, min, max) => {
min = is.defined(min) ? min : -Infinity;
max = is.defined(max) ? max : Infinity;
if (!is.integer(value)) {
return false;
}
if (value < min || value > max) {
throw new Error(`Value must be within [${min}, ${max}] inclusive, but was: ${value}`);
}
return true;
},
number: (value, min, max) => {
min = is.defined(min) ? min : -Infinity;
max = is.defined(max) ? max : Infinity;
if (!is.number(value) || is.nan(value)) {
return false;
}
if (value < min || value > max) {
throw new Error(`Value must be within [${min}, ${max}] inclusive, but was: ${value}`);
}
return true;
},
object: is.object,
string: is.string,
boolean: is.boolean
}, customValidators);
const register = type => {
const camelCase = type.substring(0, 1).toUpperCase() + type.substring(1);
this[`is${camelCase}`] = (argumentName, ...values) => {
let valid = false;
let message = is.number(argumentName) ?
`Argument at index ${argumentName} is not a valid ${type}.` :
`Argument "${argumentName}" is not a valid ${type}.`;
try {
valid = validators[type].call(null, ...values);
}
catch (err) {
message += ` ${err.message}`;
}
if (valid !== true) {
throw new Error(message);
}
};
this[`isOptional${camelCase}`] = function (argumentName, value) {
if (is.defined(value)) {
this[`is${camelCase}`].apply(null, arguments);
}
};
};
for (const type in validators) {
if (validators.hasOwnProperty(type)) {
register(type);
}
}
}
/**
* Verifies that 'args' has at least 'minSize' elements.
*
* @param {string} funcName - The function name to use in the error message.
* @param {Array.<*>} args - The array (or array-like structure) to verify.
* @param {number} minSize - The minimum number of elements to enforce.
* @throws if the expectation is not met.
* @returns {boolean} 'true' when the minimum number of elements is available.
*/
minNumberOfArguments(funcName, args, minSize) {
if (args.length < minSize) {
throw new Error(`Function '${funcName}()' requires at least ` +
`${formatPlural(minSize, 'argument')}.`);
}
return true;
}
/**
* Verifies that 'args' has at most 'maxSize' elements.
*
* @param {string} funcName - The function name to use in the error message.
* @param {Array.<*>} args - The array (or array-like structure) to verify.
* @param {number} maxSize - The maximum number of elements to enforce.
* @throws if the expectation is not met.
* @returns {boolean} 'true' when no more than the maximum number of elements
* is provided.
*/
maxNumberOfArguments(funcName, args, maxSize) {
if (args.length > maxSize) {
throw new Error(`Function '${funcName}()' accepts at most ` +
`${formatPlural(maxSize, 'argument')}.`);
}
return true;
}
}
exports.Validator = Validator;
function customObjectError(val) {
if (is.object(val) && val.constructor.name !== 'Object') {
const typeName = val.constructor.name;
switch (typeName) {
case 'DocumentReference':
case 'FieldPath':
case 'FieldValue':
case 'GeoPoint':
case 'Timestamp':
return new Error(`Detected an object of type "${typeName}" that doesn't match the ` +
'expected instance. Please ensure that the Firestore types you ' +
'are using are from the same NPM package.');
default:
return new Error(`Couldn't serialize object of type "${typeName}". Firestore ` +
'doesn\'t support JavaScript objects with custom prototypes ' +
'(i.e. objects that were created via the \'new\' operator).');
}
}
else {
return new Error(`Invalid use of type "${typeof val}" as a Firestore argument.`);
}
}
exports.customObjectError = customObjectError;
/**
* Create a new Validator, optionally registering the custom validators as
* provided.
*
* @private
* @param customValidators A list of custom validators to register.
*/
function createValidator(customValidators) {
// This function exists to change the type of `Validator` to `any` so that
// consumers can call the custom validator functions.
return new Validator(customValidators);
}
exports.createValidator = createValidator;
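// A hedged usage sketch of the Validator described above. The custom 'email'
// validator and the argument values are invented for illustration.
//
//   const { createValidator } = require('./validate');
//   const validator = createValidator({
//     email: value => typeof value === 'string' && value.indexOf('@') !== -1,
//   });
//   validator.isEmail('address', 'user@example.com');  // passes
//   validator.isOptionalEmail('address', undefined);   // passes (omitted value)
//   validator.isString('name', 42);                    // throws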

View File

@ -0,0 +1,664 @@
/*!
* Copyright 2017 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
const assert_1 = __importDefault(require("assert"));
const functional_red_black_tree_1 = __importDefault(require("functional-red-black-tree"));
const through2_1 = __importDefault(require("through2"));
const logger_1 = require("./logger");
const backoff_1 = require("./backoff");
const timestamp_1 = require("./timestamp");
const path_1 = require("./path");
const util_1 = require("./util");
const document_1 = require("./document");
const document_change_1 = require("./document-change");
/*!
* Target ID used by watch. Watch uses a fixed target id since we only support
* one target per stream.
*
* @private
* @type {number}
*/
const WATCH_TARGET_ID = 0x1;
/*!
* The change type for document change events.
*/
const ChangeType = {
added: 'added',
modified: 'modified',
removed: 'removed',
};
/*!
* List of GRPC Error Codes.
*
* This corresponds to
* {@link https://github.com/grpc/grpc/blob/master/doc/statuscodes.md}.
*/
const GRPC_STATUS_CODE = {
// Not an error; returned on success.
OK: 0,
// The operation was cancelled (typically by the caller).
CANCELLED: 1,
// Unknown error. An example of where this error may be returned is if a
// Status value received from another address space belongs to an error-space
// that is not known in this address space. Also errors raised by APIs that
// do not return enough error information may be converted to this error.
UNKNOWN: 2,
// Client specified an invalid argument. Note that this differs from
// FAILED_PRECONDITION. INVALID_ARGUMENT indicates arguments that are
// problematic regardless of the state of the system (e.g., a malformed file
// name).
INVALID_ARGUMENT: 3,
// Deadline expired before operation could complete. For operations that
// change the state of the system, this error may be returned even if the
// operation has completed successfully. For example, a successful response
// from a server could have been delayed long enough for the deadline to
// expire.
DEADLINE_EXCEEDED: 4,
// Some requested entity (e.g., file or directory) was not found.
NOT_FOUND: 5,
// Some entity that we attempted to create (e.g., file or directory) already
// exists.
ALREADY_EXISTS: 6,
// The caller does not have permission to execute the specified operation.
// PERMISSION_DENIED must not be used for rejections caused by exhausting
// some resource (use RESOURCE_EXHAUSTED instead for those errors).
// PERMISSION_DENIED must not be used if the caller can not be identified
// (use UNAUTHENTICATED instead for those errors).
PERMISSION_DENIED: 7,
// The request does not have valid authentication credentials for the
// operation.
UNAUTHENTICATED: 16,
// Some resource has been exhausted, perhaps a per-user quota, or perhaps the
// entire file system is out of space.
RESOURCE_EXHAUSTED: 8,
// Operation was rejected because the system is not in a state required for
// the operation's execution. For example, directory to be deleted may be
// non-empty, an rmdir operation is applied to a non-directory, etc.
//
// A litmus test that may help a service implementor in deciding
// between FAILED_PRECONDITION, ABORTED, and UNAVAILABLE:
// (a) Use UNAVAILABLE if the client can retry just the failing call.
// (b) Use ABORTED if the client should retry at a higher-level
// (e.g., restarting a read-modify-write sequence).
// (c) Use FAILED_PRECONDITION if the client should not retry until
// the system state has been explicitly fixed. E.g., if an "rmdir"
// fails because the directory is non-empty, FAILED_PRECONDITION
// should be returned since the client should not retry unless
// they have first fixed up the directory by deleting files from it.
// (d) Use FAILED_PRECONDITION if the client performs conditional
// REST Get/Update/Delete on a resource and the resource on the
// server does not match the condition. E.g., conflicting
// read-modify-write on the same resource.
FAILED_PRECONDITION: 9,
// The operation was aborted, typically due to a concurrency issue like
// sequencer check failures, transaction aborts, etc.
//
// See litmus test above for deciding between FAILED_PRECONDITION, ABORTED,
// and UNAVAILABLE.
ABORTED: 10,
// Operation was attempted past the valid range. E.g., seeking or reading
// past end of file.
//
// Unlike INVALID_ARGUMENT, this error indicates a problem that may be fixed
// if the system state changes. For example, a 32-bit file system will
// generate INVALID_ARGUMENT if asked to read at an offset that is not in the
// range [0,2^32-1], but it will generate OUT_OF_RANGE if asked to read from
// an offset past the current file size.
//
// There is a fair bit of overlap between FAILED_PRECONDITION and
// OUT_OF_RANGE. We recommend using OUT_OF_RANGE (the more specific error)
// when it applies so that callers who are iterating through a space can
// easily look for an OUT_OF_RANGE error to detect when they are done.
OUT_OF_RANGE: 11,
// Operation is not implemented or not supported/enabled in this service.
UNIMPLEMENTED: 12,
// Internal errors. Means some invariants expected by the underlying system
// have been broken. If you see one of these errors, something is very broken.
INTERNAL: 13,
// The service is currently unavailable. This is most likely a transient
// condition and may be corrected by retrying with a backoff.
//
// See litmus test above for deciding between FAILED_PRECONDITION, ABORTED,
// and UNAVAILABLE.
UNAVAILABLE: 14,
// Unrecoverable data loss or corruption.
DATA_LOSS: 15,
// Force users to include a default branch:
DO_NOT_USE: -1,
};
/*!
* The comparator used for document watches (which should always get called with
* the same document).
*/
const DOCUMENT_WATCH_COMPARATOR = (doc1, doc2) => {
assert_1.default(doc1 === doc2, 'Document watches only support one document.');
return 0;
};
/**
* @private
* @callback docsCallback
* @returns {Array.<QueryDocumentSnapshot>} An ordered list of documents.
*/
/**
* @private
* @callback changeCallback
* @returns {Array.<DocumentChange>} An ordered list of document
* changes.
*/
/**
* onSnapshot() callback that receives the updated query state.
*
* @private
* @callback watchSnapshotCallback
*
* @param {Timestamp} readTime - The time at which this snapshot was obtained.
* @param {number} size - The number of documents in the result set.
* @param {docsCallback} docs - A callback that returns the ordered list of
* documents stored in this snapshot.
* @param {changeCallback} changes - A callback that returns the list of
* changed documents since the last snapshot delivered for this watch.
*/
/**
* Watch provides listen functionality and exposes the 'onSnapshot' observer. It
* can be used with a valid Firestore Listen target.
*
* @class
* @private
*/
class Watch {
/**
* @private
* @hideconstructor
*
* @param {Firestore} firestore The Firestore Database client.
* @param {Object} target - A Firestore 'Target' proto denoting the target to
* listen on.
* @param {function} comparator - A comparator for QueryDocumentSnapshots that
* is used to order the document snapshots returned by this watch.
*/
constructor(firestore, target, comparator) {
this._firestore = firestore;
this._targets = target;
this._comparator = comparator;
this._backoff = new backoff_1.ExponentialBackoff();
this._requestTag = util_1.requestTag();
}
/**
* Creates a new Watch instance to listen on DocumentReferences.
*
* @private
* @param {DocumentReference} documentRef - The document
* reference for this watch.
* @returns {Watch} A newly created Watch instance.
*/
static forDocument(documentRef) {
return new Watch(documentRef.firestore, {
documents: {
documents: [documentRef.formattedName],
},
targetId: WATCH_TARGET_ID,
}, DOCUMENT_WATCH_COMPARATOR);
}
/**
* Creates a new Watch instance to listen on Queries.
*
* @private
* @param {Query} query - The query used for this watch.
* @returns {Watch} A newly created Watch instance.
*/
static forQuery(query) {
return new Watch(query.firestore, {
query: query.toProto(),
targetId: WATCH_TARGET_ID,
}, query.comparator());
}
/**
* Determines whether an error is considered permanent and should not be
* retried. Errors that don't provide a GRPC error code are always considered
* transient in this context.
*
* @private
* @param {Error} error An error object.
* @return {boolean} Whether the error is permanent.
*/
isPermanentError(error) {
if (error.code === undefined) {
logger_1.logger('Watch.onSnapshot', this._requestTag, 'Unable to determine error code: ', error);
return false;
}
switch (error.code) {
case GRPC_STATUS_CODE.CANCELLED:
case GRPC_STATUS_CODE.UNKNOWN:
case GRPC_STATUS_CODE.DEADLINE_EXCEEDED:
case GRPC_STATUS_CODE.RESOURCE_EXHAUSTED:
case GRPC_STATUS_CODE.INTERNAL:
case GRPC_STATUS_CODE.UNAVAILABLE:
case GRPC_STATUS_CODE.UNAUTHENTICATED:
return false;
default:
return true;
}
}
/**
* Determines whether we need to initiate a longer backoff due to system
* overload.
*
* @private
* @param {Error} error A GRPC Error object that exposes an error code.
* @return {boolean} Whether we need to back off our retries.
*/
isResourceExhaustedError(error) {
return error.code === GRPC_STATUS_CODE.RESOURCE_EXHAUSTED;
}
/**
* Starts a watch and attaches a listener for document change events.
*
* @private
* @param {watchSnapshotCallback} onNext - A callback to be called every time
* a new snapshot is available.
* @param {function(Error)} onError - A callback to be called if the listen
* fails or is cancelled. No further callbacks will occur.
*
* @returns {function()} An unsubscribe function that can be called to cancel
* the snapshot listener.
*/
onSnapshot(onNext, onError) {
let self = this;
// The sorted tree of QueryDocumentSnapshots as sent in the last snapshot.
// We only look at the keys.
let docTree = functional_red_black_tree_1.default(this._comparator);
// A map of document names to QueryDocumentSnapshots for the last sent
// snapshot.
let docMap = new Map();
// The accumulated map of document changes (keyed by document name) for the
// current snapshot.
let changeMap = new Map();
// The current state of the query results.
let current = false;
// We need this to track whether we've pushed an initial set of changes,
// since we should push those even when there are no changes, if there
// aren't docs.
let hasPushed = false;
// The server assigns and updates the resume token.
let resumeToken = undefined;
// Indicates whether we are interested in data from the stream. Set to false
// in the 'unsubscribe()' callback.
let isActive = true;
// Sentinel value for a document remove.
const REMOVED = {};
const request = {
database: this._firestore.formattedName,
addTarget: this._targets,
};
// We may need to replace the underlying stream on reset events.
// This is the one that will be returned and proxy the current one.
const stream = through2_1.default.obj();
// The current stream to the backend.
let currentStream = null;
/** Helper to clear the docs on RESET or filter mismatch. */
const resetDocs = function () {
logger_1.logger('Watch.onSnapshot', self._requestTag, 'Resetting documents');
changeMap.clear();
resumeToken = undefined;
docTree.forEach(snapshot => {
// Mark each document as deleted. If documents are not deleted, they
// will be sent again by the server.
changeMap.set(snapshot.ref.formattedName, REMOVED);
});
current = false;
};
/** Closes the stream and calls onError() if the stream is still active. */
const closeStream = function (err) {
if (currentStream) {
currentStream.unpipe(stream);
currentStream.end();
currentStream = null;
}
stream.end();
if (isActive) {
isActive = false;
logger_1.logger('Watch.onSnapshot', self._requestTag, 'Invoking onError: ', err);
onError(err);
}
};
/**
* Re-opens the stream unless the specified error is considered permanent.
* Clears the change map.
*/
const maybeReopenStream = function (err) {
if (isActive && !self.isPermanentError(err)) {
logger_1.logger('Watch.onSnapshot', self._requestTag, 'Stream ended, re-opening after retryable error: ', err);
request.addTarget.resumeToken = resumeToken;
changeMap.clear();
if (self.isResourceExhaustedError(err)) {
self._backoff.resetToMax();
}
resetStream();
}
else {
closeStream(err);
}
};
/** Helper to restart the outgoing stream to the backend. */
const resetStream = function () {
logger_1.logger('Watch.onSnapshot', self._requestTag, 'Opening new stream');
if (currentStream) {
currentStream.unpipe(stream);
currentStream.end();
currentStream = null;
}
initStream();
};
/**
* Initializes a new stream to the backend with backoff.
*/
const initStream = function () {
self._backoff.backoffAndWait().then(() => {
if (!isActive) {
logger_1.logger('Watch.onSnapshot', self._requestTag, 'Not initializing inactive stream');
return;
}
// Note that we need to call the internal _listen API to pass additional
// header values in readWriteStream.
self._firestore
.readWriteStream('listen', request, self._requestTag, true)
.then(backendStream => {
if (!isActive) {
logger_1.logger('Watch.onSnapshot', self._requestTag, 'Closing inactive stream');
backendStream.end();
return;
}
logger_1.logger('Watch.onSnapshot', self._requestTag, 'Opened new stream');
currentStream = backendStream;
currentStream.on('error', err => {
maybeReopenStream(err);
});
currentStream.on('end', () => {
const err = new Error('Stream ended unexpectedly');
err.code = GRPC_STATUS_CODE.UNKNOWN;
maybeReopenStream(err);
});
currentStream.pipe(stream);
currentStream.resume();
})
.catch(closeStream);
});
};
/**
* Checks if the current target id is included in the list of target ids.
* If no targetIds are provided, returns true.
*/
const affectsTarget = function (targetIds, currentId) {
if (targetIds === undefined || targetIds.length === 0) {
return true;
}
for (let targetId of targetIds) {
if (targetId === currentId) {
return true;
}
}
return false;
};
/** Splits up document changes into removals, additions, and updates. */
const extractChanges = function (docMap, changes, readTime) {
let deletes = [];
let adds = [];
let updates = [];
changes.forEach((value, name) => {
if (value === REMOVED) {
if (docMap.has(name)) {
deletes.push(name);
}
}
else if (docMap.has(name)) {
value.readTime = readTime;
updates.push(value.build());
}
else {
value.readTime = readTime;
adds.push(value.build());
}
});
return { deletes, adds, updates };
};
/**
* Applies the mutations in changeMap to both the document tree and the
* document lookup map. Modifies docMap in-place and returns the updated
* state.
*/
const computeSnapshot = function (docTree, docMap, changes) {
let updatedTree = docTree;
let updatedMap = docMap;
assert_1.default(docTree.length === docMap.size, 'The document tree and document ' +
'map should have the same number of entries.');
/**
* Applies a document delete to the document tree and the document map.
* Returns the corresponding DocumentChange event.
*/
const deleteDoc = function (name) {
assert_1.default(updatedMap.has(name), 'Document to delete does not exist');
let oldDocument = updatedMap.get(name);
let existing = updatedTree.find(oldDocument);
let oldIndex = existing.index;
updatedTree = existing.remove();
updatedMap.delete(name);
return new document_change_1.DocumentChange(ChangeType.removed, oldDocument, oldIndex, -1);
};
/**
* Applies a document add to the document tree and the document map.
* Returns the corresponding DocumentChange event.
*/
const addDoc = function (newDocument) {
let name = newDocument.ref.formattedName;
assert_1.default(!updatedMap.has(name), 'Document to add already exists');
updatedTree = updatedTree.insert(newDocument, null);
let newIndex = updatedTree.find(newDocument).index;
updatedMap.set(name, newDocument);
return new document_change_1.DocumentChange(ChangeType.added, newDocument, -1, newIndex);
};
/**
* Applies a document modification to the document tree and the document
* map. Returns the DocumentChange event for successful modifications.
*/
const modifyDoc = function (newDocument) {
let name = newDocument.ref.formattedName;
assert_1.default(updatedMap.has(name), 'Document to modify does not exist');
let oldDocument = updatedMap.get(name);
if (!oldDocument.updateTime.isEqual(newDocument.updateTime)) {
let removeChange = deleteDoc(name);
let addChange = addDoc(newDocument);
return new document_change_1.DocumentChange(ChangeType.modified, newDocument, removeChange.oldIndex, addChange.newIndex);
}
return null;
};
// Process the sorted changes in the order that is expected by our
// clients (removals, additions, and then modifications). We also need to
// sort the individual changes to ensure that oldIndex/newIndex keep
// incrementing.
let appliedChanges = [];
changes.deletes.sort((name1, name2) => {
// Deletes are sorted based on the order of the existing document.
return self._comparator(updatedMap.get(name1), updatedMap.get(name2));
});
changes.deletes.forEach(name => {
let change = deleteDoc(name);
if (change) {
appliedChanges.push(change);
}
});
changes.adds.sort(self._comparator);
changes.adds.forEach(snapshot => {
let change = addDoc(snapshot);
if (change) {
appliedChanges.push(change);
}
});
changes.updates.sort(self._comparator);
changes.updates.forEach(snapshot => {
let change = modifyDoc(snapshot);
if (change) {
appliedChanges.push(change);
}
});
assert_1.default(updatedTree.length === updatedMap.size, 'The updated document ' +
'tree and document map should have the same number of entries.');
return { updatedTree, updatedMap, appliedChanges };
};
/**
* Assembles a new snapshot from the current set of changes and invokes the
* user's callback. Clears the current changes on completion.
*/
const push = function (readTime, nextResumeToken) {
let changes = extractChanges(docMap, changeMap, readTime);
let diff = computeSnapshot(docTree, docMap, changes);
if (!hasPushed || diff.appliedChanges.length > 0) {
logger_1.logger('Watch.onSnapshot', self._requestTag, 'Sending snapshot with %d changes and %d documents', diff.appliedChanges.length, diff.updatedTree.length);
onNext(readTime, diff.updatedTree.length, () => diff.updatedTree.keys, () => diff.appliedChanges);
hasPushed = true;
}
docTree = diff.updatedTree;
docMap = diff.updatedMap;
changeMap.clear();
resumeToken = nextResumeToken;
};
/**
* Returns the current count of all documents, including the changes from
* the current changeMap.
*/
const currentSize = function () {
let changes = extractChanges(docMap, changeMap);
return docMap.size + changes.adds.length - changes.deletes.length;
};
initStream();
stream
.on('data', proto => {
if (proto.targetChange) {
logger_1.logger('Watch.onSnapshot', self._requestTag, 'Processing target change');
const change = proto.targetChange;
const noTargetIds = !change.targetIds || change.targetIds.length === 0;
if (change.targetChangeType === 'NO_CHANGE') {
if (noTargetIds && change.readTime && current) {
// This means everything is up-to-date, so emit the current
// set of docs as a snapshot, if there were changes.
push(timestamp_1.Timestamp.fromProto(change.readTime), change.resumeToken);
}
}
else if (change.targetChangeType === 'ADD') {
if (WATCH_TARGET_ID !== change.targetIds[0]) {
closeStream(Error('Unexpected target ID sent by server'));
}
}
else if (change.targetChangeType === 'REMOVE') {
let code = 13;
let message = 'internal error';
if (change.cause) {
code = change.cause.code;
message = change.cause.message;
}
// @todo: Surface a .code property on the exception.
closeStream(new Error('Error ' + code + ': ' + message));
}
else if (change.targetChangeType === 'RESET') {
// Whatever changes have happened so far no longer matter.
resetDocs();
}
else if (change.targetChangeType === 'CURRENT') {
current = true;
}
else {
closeStream(new Error('Unknown target change type: ' + JSON.stringify(change)));
}
if (change.resumeToken &&
affectsTarget(change.targetIds, WATCH_TARGET_ID)) {
this._backoff.reset();
}
}
else if (proto.documentChange) {
logger_1.logger('Watch.onSnapshot', self._requestTag, 'Processing change event');
// No other targetIds can show up here, but we still need to see
// if the targetId was in the added list or removed list.
const targetIds = proto.documentChange.targetIds || [];
const removedTargetIds = proto.documentChange.removedTargetIds || [];
let changed = false;
let removed = false;
for (let i = 0; i < targetIds.length; i++) {
if (targetIds[i] === WATCH_TARGET_ID) {
changed = true;
}
}
for (let i = 0; i < removedTargetIds.length; i++) {
if (removedTargetIds[i] === WATCH_TARGET_ID) {
removed = true;
}
}
const document = proto.documentChange.document;
const name = document.name;
if (changed) {
logger_1.logger('Watch.onSnapshot', self._requestTag, 'Received document change');
const snapshot = new document_1.DocumentSnapshot.Builder();
snapshot.ref = self._firestore.doc(path_1.ResourcePath.fromSlashSeparatedString(name).relativeName);
snapshot.fieldsProto = document.fields || {};
snapshot.createTime =
timestamp_1.Timestamp.fromProto(document.createTime);
snapshot.updateTime =
timestamp_1.Timestamp.fromProto(document.updateTime);
changeMap.set(name, snapshot);
}
else if (removed) {
logger_1.logger('Watch.onSnapshot', self._requestTag, 'Received document remove');
changeMap.set(name, REMOVED);
}
}
else if (proto.documentDelete || proto.documentRemove) {
logger_1.logger('Watch.onSnapshot', self._requestTag, 'Processing remove event');
const name = (proto.documentDelete || proto.documentRemove).document;
changeMap.set(name, REMOVED);
}
else if (proto.filter) {
logger_1.logger('Watch.onSnapshot', self._requestTag, 'Processing filter update');
if (proto.filter.count !== currentSize()) {
// We need to remove all the current results.
resetDocs();
// The filter didn't match, so re-issue the query.
resetStream();
}
}
else {
closeStream(new Error('Unknown listen response type: ' + JSON.stringify(proto)));
}
})
.on('end', () => {
logger_1.logger('Watch.onSnapshot', self._requestTag, 'Processing stream end');
if (currentStream) {
// Pass the event on to the underlying stream.
currentStream.end();
}
});
return () => {
logger_1.logger('Watch.onSnapshot', self._requestTag, 'Ending stream');
// Prevent further callbacks.
isActive = false;
onNext = () => { };
onError = () => { };
stream.end();
};
}
}
exports.Watch = Watch;
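// A hedged usage sketch of the private Watch API above; callers normally reach
// it through DocumentReference#onSnapshot or Query#onSnapshot. `documentRef`
// is assumed to be an initialized DocumentReference.
//
//   const watch = Watch.forDocument(documentRef);
//   const unsubscribe = watch.onSnapshot(
//       (readTime, size, docs, changes) => {
//         console.log(`Snapshot at ${readTime.toDate()}: ${size} document(s)`);
//         for (const change of changes()) {
//           console.log(`${change.type}: ${change.doc.ref.path}`);
//         }
//       },
//       err => console.error('Watch ended with error:', err));
//   // Stop listening:
//   unsubscribe();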

View File

@ -0,0 +1,510 @@
/*!
* Copyright 2017 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
const assert_1 = __importDefault(require("assert"));
const is_1 = __importDefault(require("is"));
const logger_1 = require("./logger");
const document_1 = require("./document");
const path_1 = require("./path");
const timestamp_1 = require("./timestamp");
const util_1 = require("./util");
/*!
* Google Cloud Functions terminates idle connections after two minutes. After
* longer periods of idleness, we issue transactional commits to allow for
* retries.
*
* @type {number}
*/
const GCF_IDLE_TIMEOUT_MS = 110 * 1000;
/**
* A WriteResult wraps the write time set by the Firestore servers on sets(),
* updates(), and creates().
*
* @class
*/
class WriteResult {
/**
* @private
* @hideconstructor
*
* @param {Timestamp} writeTime - The time of the corresponding document
* write.
*/
constructor(writeTime) {
this._writeTime = writeTime;
}
/**
* The write time as set by the Firestore servers.
*
* @type {Timestamp}
* @name WriteResult#writeTime
* @readonly
*
* @example
* let documentRef = firestore.doc('col/doc');
*
* documentRef.set({foo: 'bar'}).then(writeResult => {
* console.log(`Document written at: ${writeResult.writeTime.toDate()}`);
* });
*/
get writeTime() {
return this._writeTime;
}
/**
* Returns true if this `WriteResult` is equal to the provided value.
*
* @param {*} other The value to compare against.
* @return true if this `WriteResult` is equal to the provided value.
*/
isEqual(other) {
return (this === other ||
(is_1.default.instanceof(other, WriteResult) &&
this._writeTime.isEqual(other._writeTime)));
}
}
exports.WriteResult = WriteResult;
/**
* A Firestore WriteBatch that can be used to atomically commit multiple write
* operations at once.
*
* @class
*/
class WriteBatch {
/**
* @private
* @hideconstructor
*
* @param {Firestore} firestore - The Firestore Database client.
*/
constructor(firestore) {
this._firestore = firestore;
this._validator = firestore._validator;
this._serializer = firestore._serializer;
this._writes = [];
this._committed = false;
}
/**
* Checks if this write batch has any pending operations.
*
* @private
* @returns {boolean}
*/
get isEmpty() {
return this._writes.length === 0;
}
/**
* Throws an error if this batch has already been committed.
*
* @private
*/
verifyNotCommitted() {
if (this._committed) {
throw new Error('Cannot modify a WriteBatch that has been committed.');
}
}
/**
* Create a document with the provided object values. This will fail the batch
* if a document exists at its location.
*
* @param {DocumentReference} documentRef - A reference to the
* document to be created.
* @param {DocumentData} data - The object to serialize as the document.
* @returns {WriteBatch} This WriteBatch instance. Used for chaining
* method calls.
*
* @example
* let writeBatch = firestore.batch();
* let documentRef = firestore.collection('col').doc();
*
* writeBatch.create(documentRef, {foo: 'bar'});
*
* writeBatch.commit().then(() => {
* console.log('Successfully executed batch.');
* });
*/
create(documentRef, data) {
this._validator.isDocumentReference('documentRef', documentRef);
this._validator.isDocument('data', data, {
allowEmpty: true,
allowDeletes: 'none',
allowTransforms: true,
});
this.verifyNotCommitted();
const document = document_1.DocumentSnapshot.fromObject(documentRef, data);
const precondition = new document_1.Precondition({ exists: false });
const transform = document_1.DocumentTransform.fromObject(documentRef, data);
transform.validate();
this._writes.push({
write: !document.isEmpty || transform.isEmpty ? document.toProto() : null,
transform: transform.toProto(this._serializer),
precondition: precondition.toProto(),
});
return this;
}
/**
* Deletes a document from the database.
*
* @param {DocumentReference} documentRef - A reference to the
* document to be deleted.
* @param {Precondition=} precondition - A precondition to enforce for this
* delete.
* @param {Timestamp=} precondition.lastUpdateTime If set, enforces that the
* document was last updated at lastUpdateTime. Fails the batch if the
* document doesn't exist or was last updated at a different time.
* @returns {WriteBatch} This WriteBatch instance. Used for chaining
* method calls.
*
* @example
* let writeBatch = firestore.batch();
* let documentRef = firestore.doc('col/doc');
*
* writeBatch.delete(documentRef);
*
* writeBatch.commit().then(() => {
* console.log('Successfully executed batch.');
* });
*/
delete(documentRef, precondition) {
this._validator.isDocumentReference('documentRef', documentRef);
this._validator.isOptionalDeletePrecondition('precondition', precondition);
this.verifyNotCommitted();
const conditions = new document_1.Precondition(precondition);
this._writes.push({
write: {
delete: documentRef.formattedName,
},
precondition: conditions.toProto(),
});
return this;
}
/**
* Write to the document referred to by the provided
* [DocumentReference]{@link DocumentReference}.
* If the document does not exist yet, it will be created. If you pass
* [SetOptions]{@link SetOptions}, the provided data can be merged
* into the existing document.
*
* @param {DocumentReference} documentRef - A reference to the
* document to be set.
* @param {DocumentData} data - The object to serialize as the document.
* @param {SetOptions=} options - An object to configure the set behavior.
* @param {boolean=} options.merge - If true, set() merges the values
* specified in its data argument. Fields omitted from this set() call
* remain untouched.
* @param {Array.<string|FieldPath>=} options.mergeFields - If provided,
* set() only replaces the specified field paths. Any field path that is not
* specified is ignored and remains untouched.
* @returns {WriteBatch} This WriteBatch instance. Used for chaining
* method calls.
*
* @example
* let writeBatch = firestore.batch();
* let documentRef = firestore.doc('col/doc');
*
* writeBatch.set(documentRef, {foo: 'bar'});
*
* writeBatch.commit().then(() => {
* console.log('Successfully executed batch.');
* });
*/
set(documentRef, data, options) {
this._validator.isOptionalSetOptions('options', options);
const mergeLeaves = options && options.merge === true;
const mergePaths = options && options.mergeFields;
this._validator.isDocumentReference('documentRef', documentRef);
this._validator.isDocument('data', data, {
allowEmpty: true,
allowDeletes: mergePaths || mergeLeaves ? 'all' : 'none',
allowTransforms: true,
});
this.verifyNotCommitted();
let documentMask;
if (mergePaths) {
documentMask = document_1.DocumentMask.fromFieldMask(options.mergeFields);
data = documentMask.applyTo(data);
}
const transform = document_1.DocumentTransform.fromObject(documentRef, data);
transform.validate();
const document = document_1.DocumentSnapshot.fromObject(documentRef, data);
if (mergePaths) {
documentMask.removeFields(transform.fields);
}
else {
documentMask = document_1.DocumentMask.fromObject(data);
}
const hasDocumentData = !document.isEmpty || !documentMask.isEmpty;
let write;
if (!mergePaths && !mergeLeaves) {
write = document.toProto();
}
else if (hasDocumentData || transform.isEmpty) {
write = document.toProto();
write.updateMask = documentMask.toProto(this._serializer);
}
this._writes.push({
write,
transform: transform.toProto(this._serializer),
});
return this;
}
/**
* Update fields of the document referred to by the provided
* [DocumentReference]{@link DocumentReference}. If the document
* doesn't yet exist, the update fails and the entire batch will be rejected.
*
* The update() method accepts either an object with field paths encoded as
* keys and field values encoded as values, or a variable number of arguments
* that alternate between field paths and field values. Nested fields can be
* updated by providing dot-separated field path strings or by providing
* FieldPath objects.
*
* A Precondition restricting this update can be specified as the last
* argument.
*
* @param {DocumentReference} documentRef - A reference to the
* document to be updated.
* @param {UpdateData|string|FieldPath} dataOrField - An object
* containing the fields and values with which to update the document
* or the path of the first field to update.
* @param {...(Precondition|*|string|FieldPath)} preconditionOrValues -
* An alternating list of field paths and values to update or a Precondition
* to restrict this update.
* @returns {WriteBatch} This WriteBatch instance. Used for chaining
* method calls.
*
* @example
* let writeBatch = firestore.batch();
* let documentRef = firestore.doc('col/doc');
*
* writeBatch.update(documentRef, {foo: 'bar'});
*
* writeBatch.commit().then(() => {
* console.log('Successfully executed batch.');
* });
*/
update(documentRef, dataOrField, preconditionOrValues) {
this._validator.minNumberOfArguments('update', arguments, 2);
this._validator.isDocumentReference('documentRef', documentRef);
this.verifyNotCommitted();
const updateMap = new Map();
let precondition = new document_1.Precondition({ exists: true });
const argumentError = 'Update() requires either a single JavaScript ' +
'object or an alternating list of field/value pairs that can be ' +
'followed by an optional precondition.';
let usesVarargs = is_1.default.string(dataOrField) || is_1.default.instance(dataOrField, path_1.FieldPath);
if (usesVarargs) {
try {
for (let i = 1; i < arguments.length; i += 2) {
if (i === arguments.length - 1) {
this._validator.isUpdatePrecondition(i, arguments[i]);
precondition = new document_1.Precondition(arguments[i]);
}
else {
this._validator.isFieldPath(i, arguments[i]);
this._validator.minNumberOfArguments('update', arguments, i + 1);
this._validator.isFieldValue(i, arguments[i + 1], {
allowDeletes: 'root',
allowTransforms: true,
});
updateMap.set(path_1.FieldPath.fromArgument(arguments[i]), arguments[i + 1]);
}
}
}
catch (err) {
logger_1.logger('WriteBatch.update', null, 'Varargs validation failed:', err);
// We catch the validation error here and re-throw to provide a better
// error message.
throw new Error(`${argumentError} ${err.message}`);
}
}
else {
try {
this._validator.isDocument('dataOrField', dataOrField, {
allowEmpty: false,
allowDeletes: 'root',
allowTransforms: true,
});
this._validator.maxNumberOfArguments('update', arguments, 3);
Object.keys(dataOrField).forEach(key => {
this._validator.isFieldPath(key, key);
updateMap.set(path_1.FieldPath.fromArgument(key), dataOrField[key]);
});
if (is_1.default.defined(preconditionOrValues)) {
this._validator.isUpdatePrecondition('preconditionOrValues', preconditionOrValues);
precondition = new document_1.Precondition(preconditionOrValues);
}
}
catch (err) {
logger_1.logger('WriteBatch.update', null, 'Non-varargs validation failed:', err);
// We catch the validation error here and prefix the error with a custom
// message to describe the usage of update() better.
throw new Error(`${argumentError} ${err.message}`);
}
}
this._validator.isUpdateMap('dataOrField', updateMap);
let document = document_1.DocumentSnapshot.fromUpdateMap(documentRef, updateMap);
let documentMask = document_1.DocumentMask.fromUpdateMap(updateMap);
let write = null;
if (!document.isEmpty || !documentMask.isEmpty) {
write = document.toProto();
write.updateMask = documentMask.toProto();
}
let transform = document_1.DocumentTransform.fromUpdateMap(documentRef, updateMap);
transform.validate();
this._writes.push({
write: write,
transform: transform.toProto(this._serializer),
precondition: precondition.toProto(),
});
return this;
}
/**
* Atomically commits all pending operations to the database and verifies all
* preconditions. Fails the entire write if any precondition is not met.
*
* @returns {Promise.<Array.<WriteResult>>} A Promise that resolves
* when this batch completes.
*
* @example
* let writeBatch = firestore.batch();
* let documentRef = firestore.doc('col/doc');
*
* writeBatch.set(documentRef, {foo: 'bar'});
*
* writeBatch.commit().then(() => {
* console.log('Successfully executed batch.');
* });
*/
commit() {
return this.commit_();
}
/**
* Commit method that takes an optional transaction ID.
*
* @private
* @param {object=} commitOptions Options to use for this commit.
* @param {bytes=} commitOptions.transactionId The transaction ID of this
* commit.
* @param {string=} commitOptions.requestTag A unique client-assigned
* identifier for this request.
* @returns {Promise.<Array.<WriteResult>>} A Promise that resolves
* when this batch completes.
*/
commit_(commitOptions) {
// Note: We don't call `verifyNotCommitted()` to allow for retries.
let explicitTransaction = commitOptions && commitOptions.transactionId;
let tag = (commitOptions && commitOptions.requestTag) || util_1.requestTag();
let request = {
database: this._firestore.formattedName,
};
// On GCF, we periodically force transactional commits to allow for
// request retries in case GCF closes our backend connection.
if (!explicitTransaction && this._shouldCreateTransaction()) {
logger_1.logger('WriteBatch.commit', tag, 'Using transaction for commit');
return this._firestore.request('beginTransaction', request, tag, true)
.then(resp => {
return this.commit_({ transactionId: resp.transaction });
});
}
request.writes = [];
for (let req of this._writes) {
assert_1.default(req.write || req.transform, 'Either a write or transform must be set');
if (req.precondition) {
(req.write || req.transform).currentDocument = req.precondition;
}
if (req.write) {
request.writes.push(req.write);
}
if (req.transform) {
request.writes.push(req.transform);
}
}
logger_1.logger('WriteBatch.commit', tag, 'Sending %d writes', request.writes.length);
if (explicitTransaction) {
request.transaction = explicitTransaction;
}
this._committed = true;
return this._firestore.request('commit', request, tag).then(resp => {
const writeResults = [];
if (request.writes.length > 0) {
assert_1.default(resp.writeResults instanceof Array &&
request.writes.length === resp.writeResults.length, `Expected one write result per operation, but got ${resp.writeResults.length} results for ${request.writes.length} operations.`);
const commitTime = timestamp_1.Timestamp.fromProto(resp.commitTime);
let offset = 0;
for (let i = 0; i < this._writes.length; ++i) {
let writeRequest = this._writes[i];
// Don't return two write results for a write that contains a
// transform, as the fact that we have to split one write operation
// into two distinct write requests is an implementation detail.
if (writeRequest.write && writeRequest.transform) {
// The document transform is always sent last and produces the
// latest update time.
++offset;
}
let writeResult = resp.writeResults[i + offset];
writeResults.push(new WriteResult(writeResult.updateTime ?
timestamp_1.Timestamp.fromProto(writeResult.updateTime) :
commitTime));
}
}
return writeResults;
});
}
/**
* Determines whether we should issue a transactional commit. On GCF, this
* happens after two minutes of idleness.
*
* @private
* @returns {boolean} Whether to use a transaction.
*/
_shouldCreateTransaction() {
if (!this._firestore._preferTransactions) {
return false;
}
if (this._firestore._lastSuccessfulRequest) {
let now = new Date().getTime();
return now - this._firestore._lastSuccessfulRequest > GCF_IDLE_TIMEOUT_MS;
}
return true;
}
}
exports.WriteBatch = WriteBatch;
/*!
* Validates that the update data does not contain any ambiguous field
* definitions (such as 'a.b' and 'a').
*
* @param {Map.<FieldPath, *>} data - An update map with field/value pairs.
* @returns {boolean} 'true' if the input is a valid update map.
*/
function validateUpdateMap(data) {
const fields = [];
data.forEach((value, key) => {
fields.push(key);
});
fields.sort((left, right) => left.compareTo(right));
for (let i = 1; i < fields.length; ++i) {
if (fields[i - 1].isPrefixOf(fields[i])) {
throw new Error(`Field "${fields[i - 1]}" was specified multiple times.`);
}
}
return true;
}
exports.validateUpdateMap = validateUpdateMap;
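The WriteBatch methods above chain into a single atomic commit. The following is a minimal usage sketch pieced together from the JSDoc examples in this file; the initialized `firestore` client and the document paths are illustrative assumptions, not part of the library source.
```javascript
// Minimal sketch of the WriteBatch API defined above. The `firestore` client
// and the document paths ('col/doc', 'col/old') are illustrative assumptions.
const batch = firestore.batch();

// create() fails the batch if a document already exists at the new reference.
batch.create(firestore.collection('col').doc(), {foo: 'bar'});

// set() with {merge: true} merges the given fields into the existing document;
// fields omitted from the data argument remain untouched.
batch.set(firestore.doc('col/doc'), {counter: 1}, {merge: true});

// update() accepts a single object of field paths and values ...
batch.update(firestore.doc('col/doc'), {'nested.field': 'value'});
// ... or an alternating list of field paths and values:
// batch.update(firestore.doc('col/doc'), 'nested.field', 'value');

// delete() removes a document, optionally guarded by a precondition.
batch.delete(firestore.doc('col/old'));

// commit() applies all writes atomically and resolves with one WriteResult
// per operation.
batch.commit().then(writeResults => {
  writeResults.forEach(result => {
    console.log(`Written at: ${result.writeTime.toDate()}`);
  });
});
```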

View File

@ -0,0 +1,145 @@
{
"_from": "@google-cloud/firestore@^0.16.0",
"_id": "@google-cloud/firestore@0.16.1",
"_inBundle": false,
"_integrity": "sha512-xHb4OdRb0OP0x/8w58WJERtCi9Pr+CsloiUlVAq6fFjSyEcmxgL0V+swE8A/2rI5NGQGwtrN57xwDcis5UM/cQ==",
"_location": "/@google-cloud/firestore",
"_phantomChildren": {},
"_requested": {
"type": "range",
"registry": true,
"raw": "@google-cloud/firestore@^0.16.0",
"name": "@google-cloud/firestore",
"escapedName": "@google-cloud%2ffirestore",
"scope": "@google-cloud",
"rawSpec": "^0.16.0",
"saveSpec": null,
"fetchSpec": "^0.16.0"
},
"_requiredBy": [
"/firebase-admin"
],
"_resolved": "https://registry.npmjs.org/@google-cloud/firestore/-/firestore-0.16.1.tgz",
"_shasum": "aedba92d428cc78f8a6e3eb6ab59e3c8d906f814",
"_spec": "@google-cloud/firestore@^0.16.0",
"_where": "C:\\Users\\jlevi\\Downloads\\tr2022-strategy-master\\tr2022-strategy-master\\data analysis\\functions\\node_modules\\firebase-admin",
"author": {
"name": "Google Inc."
},
"bugs": {
"url": "https://github.com/googleapis/nodejs-firestore/issues"
},
"bundleDependencies": false,
"contributors": [
{
"name": "Alexander Fenster",
"email": "github@fenster.name"
},
{
"name": "Bond",
"email": "bondz@users.noreply.github.com"
},
{
"name": "Jason Dobry",
"email": "jdobry@google.com"
},
{
"name": "Luke Sneeringer",
"email": "lukesneeringer@google.com"
},
{
"name": "Sebastian Schmidt",
"email": "mrschmidt@google.com"
},
{
"name": "Stephen Sawchuk",
"email": "sawchuk@gmail.com"
},
{
"name": "greenkeeper[bot]",
"email": "greenkeeper[bot]@users.noreply.github.com"
}
],
"dependencies": {
"@google-cloud/projectify": "^0.3.0",
"bun": "^0.0.12",
"deep-equal": "^1.0.1",
"extend": "^3.0.1",
"functional-red-black-tree": "^1.0.1",
"google-gax": "^0.18.0",
"google-proto-files": "^0.16.1",
"is": "^3.2.1",
"lodash.merge": "^4.6.1",
"protobufjs": "^6.8.6",
"through2": "^2.0.3"
},
"deprecated": false,
"description": "Firestore Client Library for Node.js",
"devDependencies": {
"@google-cloud/nodejs-repo-tools": "^2.3.0",
"@types/mocha": "^5.2.3",
"@types/node": "^10.3.5",
"chai": "^4.1.2",
"chai-as-promised": "^7.1.1",
"codecov": "^3.0.2",
"duplexify": "^3.6.0",
"gts": "^0.7.1",
"hard-rejection": "^1.0.0",
"ink-docstrap": "git+https://github.com/docstrap/docstrap.git",
"intelli-espower-loader": "^1.0.1",
"jsdoc": "^3.5.5",
"mocha": "^5.2.0",
"nyc": "^12.0.2",
"power-assert": "^1.6.0",
"proxyquire": "^2.0.1",
"source-map-support": "^0.5.6",
"ts-node": "^7.0.0",
"typescript": "^2.9.2"
},
"engines": {
"node": ">=6.0.0"
},
"files": [
"build/protos",
"build/src",
"types"
],
"homepage": "https://github.com/googleapis/nodejs-firestore#readme",
"keywords": [
"google apis client",
"google api client",
"google apis",
"google api",
"google",
"google cloud platform",
"google cloud",
"cloud",
"firestore"
],
"license": "Apache-2.0",
"main": "./build/src/index.js",
"name": "@google-cloud/firestore",
"repository": {
"type": "git",
"url": "git+https://github.com/googleapis/nodejs-firestore.git"
},
"scripts": {
"check": "gts check",
"clean": "gts clean",
"codecov": "nyc report --reporter=json && codecov -f .coverage/*.json",
"compile": "tsc -p . && cp -r dev/protos build && cp -r dev/test/fake-certificate.json build/test/fake-certificate.json && cp dev/src/v1beta1/firestore_client_config.json build/src/v1beta1/ && cp dev/conformance/test-definition.proto build/conformance && cp dev/conformance/test-suite.binproto build/conformance",
"conformance": "mocha build/conformance",
"docs": "jsdoc -c .jsdoc.js",
"fix": "gts fix",
"generate-scaffolding": "repo-tools generate all",
"posttest": "npm run check",
"predocs": "npm run compile",
"prepare": "npm run compile",
"pretest-only": "npm run compile",
"system-test": "mocha build/system-test --timeout 600000",
"test": "npm run test-only",
"test-only": "nyc mocha build/test"
},
"types": "./types/firestore.d.ts",
"version": "0.16.1"
}

File diff suppressed because it is too large

View File

@ -0,0 +1,58 @@
# Changelog
[npm history][1]
[1]: https://www.npmjs.com/package/@google-cloud/projectify?activeTab=versions
## v0.3.2
### Bug fixes
- fix: do not replace projectId on stream objects ([#53](https://github.com/googleapis/nodejs-projectify/pull/53))
### Dependencies
- chore(deps): update dependency gts to ^0.9.0 ([#52](https://github.com/googleapis/nodejs-projectify/pull/52))
### Internal / Testing Changes
- chore: update eslintignore config ([#51](https://github.com/googleapis/nodejs-projectify/pull/51))
- chore: use latest npm on Windows ([#50](https://github.com/googleapis/nodejs-projectify/pull/50))
- chore: update CircleCI config ([#49](https://github.com/googleapis/nodejs-projectify/pull/49))
- chore: include build in eslintignore ([#46](https://github.com/googleapis/nodejs-projectify/pull/46))
## v0.3.1
### Implementation Changes
- fix: replaceProjectId should not fail when passed a Buffer ([#43](https://github.com/googleapis/nodejs-projectify/pull/43))
### Dependencies
- chore(deps): update dependency nyc to v13 ([#13](https://github.com/googleapis/nodejs-projectify/pull/13))
- chore(deps): lock file maintenance ([#11](https://github.com/googleapis/nodejs-projectify/pull/11))
- chore(deps): lock file maintenance ([#8](https://github.com/googleapis/nodejs-projectify/pull/8))
- chore(deps): update dependency typescript to v3 ([#7](https://github.com/googleapis/nodejs-projectify/pull/7))
- chore(deps): update dependency gts to ^0.8.0 ([#2](https://github.com/googleapis/nodejs-projectify/pull/2))
- chore(deps): lock file maintenance ([#4](https://github.com/googleapis/nodejs-projectify/pull/4))
- chore(deps): lock file maintenance ([#3](https://github.com/googleapis/nodejs-projectify/pull/3))
### Internal / Testing Changes
- chore: update issue templates ([#40](https://github.com/googleapis/nodejs-projectify/pull/40))
- chore: remove old issue template ([#38](https://github.com/googleapis/nodejs-projectify/pull/38))
- build: run tests on node11 ([#37](https://github.com/googleapis/nodejs-projectify/pull/37))
- chores(build): run codecov on continuous builds ([#34](https://github.com/googleapis/nodejs-projectify/pull/34))
- chores(build): do not collect sponge.xml from windows builds ([#35](https://github.com/googleapis/nodejs-projectify/pull/35))
- chore: update new issue template ([#33](https://github.com/googleapis/nodejs-projectify/pull/33))
- build: fix codecov uploading on Kokoro ([#30](https://github.com/googleapis/nodejs-projectify/pull/30))
- Update kokoro config ([#28](https://github.com/googleapis/nodejs-projectify/pull/28))
- Update CI config ([#26](https://github.com/googleapis/nodejs-projectify/pull/26))
- Don't publish sourcemaps ([#24](https://github.com/googleapis/nodejs-projectify/pull/24))
- build: prevent system/sample-test from leaking credentials
- Update kokoro config ([#22](https://github.com/googleapis/nodejs-projectify/pull/22))
- test: remove appveyor config ([#21](https://github.com/googleapis/nodejs-projectify/pull/21))
- Update CI config ([#20](https://github.com/googleapis/nodejs-projectify/pull/20))
- Enable prefer-const in the eslint config ([#19](https://github.com/googleapis/nodejs-projectify/pull/19))
- Enable no-var in eslint ([#18](https://github.com/googleapis/nodejs-projectify/pull/18))
- Update CI config ([#17](https://github.com/googleapis/nodejs-projectify/pull/17))
- Add synth and update CI config ([#15](https://github.com/googleapis/nodejs-projectify/pull/15))
- chore: ignore package-lock.json ([#12](https://github.com/googleapis/nodejs-projectify/pull/12))
- chore: update renovate config ([#10](https://github.com/googleapis/nodejs-projectify/pull/10))
- remove that whitespace ([#9](https://github.com/googleapis/nodejs-projectify/pull/9))
- chore: assert.deelEqual => assert.deepStrictEqual ([#6](https://github.com/googleapis/nodejs-projectify/pull/6))
- chore: move mocha options to mocha.opts ([#5](https://github.com/googleapis/nodejs-projectify/pull/5))

View File

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,15 @@
<img src="https://avatars2.githubusercontent.com/u/2810941?v=3&s=96" alt="Google Cloud Platform logo" title="Google Cloud Platform" align="right" height="96" width="96"/>
# @google-cloud/projectify
> A simple utility for replacing the projectid token in objects.
## Contributing
Contributions welcome! See the [Contributing Guide](https://github.com/googlecloudplatform/google-cloud-node/blob/master/.github/CONTRIBUTING.md).
## License
Apache Version 2.0
See [LICENSE](https://github.com/googlecloudplatform/google-cloud-node/blob/master/LICENSE)

View File

@ -0,0 +1,31 @@
/**
* Copyright 2014 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Populate the `{{projectId}}` placeholder.
*
* @throws {Error} If a projectId is required, but one is not provided.
*
* @param {*} value - Any input value that may contain a placeholder. Arrays and objects will be looped.
* @param {string} projectId - A projectId. If not provided, a MissingProjectIdError is thrown when a `{{projectId}}` placeholder is encountered.
* @return {*} - The original argument with all placeholders populated.
*/
export declare function replaceProjectIdToken(value: any, projectId: string): any;
/**
* Custom error type for missing project ID errors.
*/
export declare class MissingProjectIdError extends Error {
message: string;
}

View File

@ -0,0 +1,64 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
const stream_1 = require("stream");
/**
* Copyright 2014 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Populate the `{{projectId}}` placeholder.
*
* @throws {Error} If a projectId is required, but one is not provided.
*
* @param {*} value - Any input value that may contain a placeholder. Arrays and objects will be looped.
* @param {string} projectId - A projectId. If not provided, a MissingProjectIdError is thrown when a `{{projectId}}` placeholder is encountered.
* @return {*} - The original argument with all placeholders populated.
*/
// tslint:disable-next-line:no-any
function replaceProjectIdToken(value, projectId) {
if (Array.isArray(value)) {
value = value.map(v => replaceProjectIdToken(v, projectId));
}
if (value !== null && typeof value === 'object' &&
!(value instanceof Buffer) && !(value instanceof stream_1.Stream) &&
typeof value.hasOwnProperty === 'function') {
for (const opt in value) {
if (value.hasOwnProperty(opt)) {
value[opt] = replaceProjectIdToken(value[opt], projectId);
}
}
}
if (typeof value === 'string' &&
value.indexOf('{{projectId}}') > -1) {
if (!projectId || projectId === '{{projectId}}') {
throw new MissingProjectIdError();
}
value = value.replace(/{{projectId}}/g, projectId);
}
return value;
}
exports.replaceProjectIdToken = replaceProjectIdToken;
/**
* Custom error type for missing project ID errors.
*/
class MissingProjectIdError extends Error {
constructor() {
super(...arguments);
this.message = `Sorry, we cannot connect to Cloud Services without a project
ID. You may specify one with an environment variable named
"GOOGLE_CLOUD_PROJECT".`.replace(/ +/g, ' ');
}
}
exports.MissingProjectIdError = MissingProjectIdError;
//# sourceMappingURL=index.js.map
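For reference, a short usage sketch of the replaceProjectIdToken helper compiled above; the request-options object and the project ID string are illustrative assumptions.
```javascript
// Minimal sketch of the exported helper's behavior; `reqOpts` and the project
// ID are illustrative assumptions.
const {replaceProjectIdToken, MissingProjectIdError} =
  require('@google-cloud/projectify');

const reqOpts = {
  uri: 'https://www.googleapis.com/storage/v1/b?project={{projectId}}',
  nested: ['{{projectId}}-bucket'],
};

// Placeholders are replaced recursively in strings, arrays, and plain objects;
// Buffers and Streams are left untouched.
const populated = replaceProjectIdToken(reqOpts, 'my-project');
console.log(populated.uri);        // ...b?project=my-project
console.log(populated.nested[0]);  // my-project-bucket

// When a placeholder is present but no project ID is available, the helper
// throws a MissingProjectIdError.
try {
  replaceProjectIdToken({uri: '{{projectId}}'}, undefined);
} catch (err) {
  console.log(err instanceof MissingProjectIdError); // true
}
```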

View File

@ -0,0 +1,86 @@
{
"_from": "@google-cloud/projectify@^0.3.0",
"_id": "@google-cloud/projectify@0.3.2",
"_inBundle": false,
"_integrity": "sha512-t1bs5gE105IpgikX7zPCJZzVyXM5xZ/1kJomUPim2E2pNp4OUUFNyvKm/T2aM6GBP2F30o8abCD+/wbOhHWYYA==",
"_location": "/@google-cloud/projectify",
"_phantomChildren": {},
"_requested": {
"type": "range",
"registry": true,
"raw": "@google-cloud/projectify@^0.3.0",
"name": "@google-cloud/projectify",
"escapedName": "@google-cloud%2fprojectify",
"scope": "@google-cloud",
"rawSpec": "^0.3.0",
"saveSpec": null,
"fetchSpec": "^0.3.0"
},
"_requiredBy": [
"/@google-cloud/firestore"
],
"_resolved": "https://registry.npmjs.org/@google-cloud/projectify/-/projectify-0.3.2.tgz",
"_shasum": "ed54c98cae646dc03a742eac288184a13d33a4c2",
"_spec": "@google-cloud/projectify@^0.3.0",
"_where": "C:\\Users\\jlevi\\Downloads\\tr2022-strategy-master\\tr2022-strategy-master\\data analysis\\functions\\node_modules\\@google-cloud\\firestore",
"author": {
"name": "Google Inc."
},
"bugs": {
"url": "https://github.com/googleapis/nodejs-projectify/issues"
},
"bundleDependencies": false,
"deprecated": false,
"description": "A simple utility for replacing the projectid token in objects.",
"devDependencies": {
"@types/mocha": "^5.2.4",
"@types/node": "^10.5.2",
"codecov": "^3.0.4",
"gts": "^0.9.0",
"hard-rejection": "^1.0.0",
"intelli-espower-loader": "^1.0.1",
"mocha": "^5.2.0",
"nyc": "^13.0.0",
"source-map-support": "^0.5.6",
"typescript": "^3.0.0"
},
"files": [
"build/src",
"!build/src/**/*.map"
],
"homepage": "https://github.com/googleapis/nodejs-projectify#readme",
"keywords": [],
"license": "Apache-2.0",
"main": "build/src/index.js",
"name": "@google-cloud/projectify",
"nyc": {
"exclude": [
"build/test"
]
},
"publishConfig": {
"access": "public"
},
"repository": {
"type": "git",
"url": "git+https://github.com/googleapis/nodejs-projectify.git"
},
"scripts": {
"clean": "gts clean",
"codecov": "nyc report --reporter=json && codecov -f coverage/*.json",
"compile": "tsc -p .",
"docs": "echo no docs 👻",
"fix": "gts fix",
"lint": "gts check",
"posttest": "npm run lint",
"prepare": "npm run compile",
"presystem-test": "npm run compile",
"pretest": "npm run compile",
"samples-test": "cd samples/ && npm link ../ && npm test && cd ../",
"system-test": "mocha build/system-test",
"test": "npm run test-only",
"test-only": "nyc mocha build/test"
},
"types": "build/src/index.d.ts",
"version": "0.3.2"
}

View File

@ -0,0 +1,26 @@
# The names of individuals who have contributed to this project.
#
# Names are formatted as:
# name <email>
#
Ace Nassri <anassri@google.com>
Alexander Borovykh <immaculate.pine@gmail.com>
Alexander Fenster <github@fenster.name>
Calvin Metcalf <calvin.metcalf@gmail.com>
Colin Ihrig <cjihrig@gmail.com>
Cristian Almstrand <almstrand@users.noreply.github.com>
Dave Gramlich <callmehiphop@gmail.com>
Dominic Valenciana <kiricon@live.com>
Eric Uldall <ericuldall@gmail.com>
Ernest Landrito <landrito@google.com>
Frank Natividad <frankyn@users.noreply.github.com>
Jason Dobry <jason.dobry@gmail.com>
Jason Dobry <jmdobry@users.noreply.github.com>
Justin Sprigg <justin.sprigg@gmail.com>
Luke Sneeringer <luke@sneeringer.com>
Stephen <stephenplusplus@users.noreply.github.com>
Stephen Sawchuk <sawchuk@gmail.com>
Stephen Sawchuk <stephenplusplus@users.noreply.github.com>
Tyler Johnson <mail@tyler-johnson.ca>
Zach Bjornson <bjornson@stanford.edu>
greenkeeper[bot] <greenkeeper[bot]@users.noreply.github.com>

View File

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,134 @@
<img src="https://avatars2.githubusercontent.com/u/2810941?v=3&s=96" alt="Google Cloud Platform logo" title="Google Cloud Platform" align="right" height="96" width="96"/>
# [Google Cloud Storage: Node.js Client](https://github.com/googleapis/nodejs-storage)
[![release level](https://img.shields.io/badge/release%20level-general%20availability%20%28GA%29-brightgreen.svg?style&#x3D;flat)](https://cloud.google.com/terms/launch-stages)
[![CircleCI](https://img.shields.io/circleci/project/github/googleapis/nodejs-storage.svg?style=flat)](https://circleci.com/gh/googleapis/nodejs-storage)
[![AppVeyor](https://ci.appveyor.com/api/projects/status/github/googleapis/nodejs-storage?branch=master&svg=true)](https://ci.appveyor.com/project/googleapis/nodejs-storage)
[![codecov](https://img.shields.io/codecov/c/github/googleapis/nodejs-storage/master.svg?style=flat)](https://codecov.io/gh/googleapis/nodejs-storage)
> Node.js idiomatic client for [Cloud Storage][product-docs].
[Cloud Storage](https://cloud.google.com/storage/docs) allows world-wide storage and retrieval of any amount of data at any time. You can use Google Cloud Storage for a range of scenarios including serving website content, storing data for archival and disaster recovery, or distributing large data objects to users via direct download.
* [Cloud Storage Node.js Client API Reference][client-docs]
* [github.com/googleapis/nodejs-storage](https://github.com/googleapis/nodejs-storage)
* [Cloud Storage Documentation][product-docs]
Read more about the client libraries for Cloud APIs, including the older
Google APIs Client Libraries, in [Client Libraries Explained][explained].
[explained]: https://cloud.google.com/apis/docs/client-libraries-explained
**Table of contents:**
* [Quickstart](#quickstart)
* [Before you begin](#before-you-begin)
* [Installing the client library](#installing-the-client-library)
* [Using the client library](#using-the-client-library)
* [Samples](#samples)
* [Versioning](#versioning)
* [Contributing](#contributing)
* [License](#license)
## Quickstart
### Before you begin
1. Select or create a Cloud Platform project.
[Go to the projects page][projects]
1. Enable billing for your project.
[Enable billing][billing]
1. Enable the Google Cloud Storage API.
[Enable the API][enable_api]
1. [Set up authentication with a service account][auth] so you can access the
API from your local workstation.
[projects]: https://console.cloud.google.com/project
[billing]: https://support.google.com/cloud/answer/6293499#enable-billing
[enable_api]: https://console.cloud.google.com/flows/enableapi?apiid=storage-api.googleapis.com
[auth]: https://cloud.google.com/docs/authentication/getting-started
### Installing the client library
npm install --save @google-cloud/storage
### Using the client library
```javascript
// Imports the Google Cloud client library
const Storage = require('@google-cloud/storage');
// Your Google Cloud Platform project ID
const projectId = 'YOUR_PROJECT_ID';
// Creates a client
const storage = new Storage({
projectId: projectId,
});
// The name for the new bucket
const bucketName = 'my-new-bucket';
// Creates the new bucket
storage
.createBucket(bucketName)
.then(() => {
console.log(`Bucket ${bucketName} created.`);
})
.catch(err => {
console.error('ERROR:', err);
});
```
## Samples
Samples are in the [`samples/`](https://github.com/googleapis/nodejs-storage/tree/master/samples) directory. The samples' `README.md`
has instructions for running the samples.
| Sample | Source Code | Try it |
| --------------------------- | --------------------------------- | ------ |
| ACL (Access Control Lists) | [source code](https://github.com/googleapis/nodejs-storage/blob/master/samples/acl.js) | [![Open in Cloud Shell][shell_img]](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/googleapis/nodejs-storage&page=editor&open_in_editor=samples/acl.js,samples/README.md) |
| Buckets | [source code](https://github.com/googleapis/nodejs-storage/blob/master/samples/buckets.js) | [![Open in Cloud Shell][shell_img]](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/googleapis/nodejs-storage&page=editor&open_in_editor=samples/buckets.js,samples/README.md) |
| Encryption | [source code](https://github.com/googleapis/nodejs-storage/blob/master/samples/encryption.js) | [![Open in Cloud Shell][shell_img]](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/googleapis/nodejs-storage&page=editor&open_in_editor=samples/encryption.js,samples/README.md) |
| Files | [source code](https://github.com/googleapis/nodejs-storage/blob/master/samples/files.js) | [![Open in Cloud Shell][shell_img]](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/googleapis/nodejs-storage&page=editor&open_in_editor=samples/files.js,samples/README.md) |
| Notifications | [source code](https://github.com/googleapis/nodejs-storage/blob/master/samples/notifications.js) | [![Open in Cloud Shell][shell_img]](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/googleapis/nodejs-storage&page=editor&open_in_editor=samples/notifications.js,samples/README.md) |
| Requester Pays | [source code](https://github.com/googleapis/nodejs-storage/blob/master/samples/requesterPays.js) | [![Open in Cloud Shell][shell_img]](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/googleapis/nodejs-storage&page=editor&open_in_editor=samples/requesterPays.js,samples/README.md) |
The [Cloud Storage Node.js Client API Reference][client-docs] documentation
also contains samples.
## Versioning
This library follows [Semantic Versioning](http://semver.org/).
This library is considered to be **General Availability (GA)**. This means it
is stable; the code surface will not change in backwards-incompatible ways
unless absolutely necessary (e.g. because of critical security issues) or with
an extensive deprecation period. Issues and requests against **GA** libraries
are addressed with the highest priority.
More Information: [Google Cloud Platform Launch Stages][launch_stages]
[launch_stages]: https://cloud.google.com/terms/launch-stages
## Contributing
Contributions welcome! See the [Contributing Guide](https://github.com/googleapis/nodejs-storage/blob/master/.github/CONTRIBUTING.md).
## License
Apache Version 2.0
See [LICENSE](https://github.com/googleapis/nodejs-storage/blob/master/LICENSE)
[client-docs]: https://cloud.google.com/nodejs/docs/reference/storage/latest/
[product-docs]: https://cloud.google.com/storage/docs
[shell_img]: //gstatic.com/cloudssh/images/open-btn.png

View File

@ -0,0 +1,209 @@
{
"_from": "@google-cloud/storage@^1.6.0",
"_id": "@google-cloud/storage@1.7.0",
"_inBundle": false,
"_integrity": "sha512-QaAxzCkbhspwajoaEnT0GcnQcpjPRcBrHYuQsXtD05BtOJgVnHCLXSsfUiRdU0nVpK+Thp7+sTkQ0fvk5PanKg==",
"_location": "/@google-cloud/storage",
"_phantomChildren": {},
"_requested": {
"type": "range",
"registry": true,
"raw": "@google-cloud/storage@^1.6.0",
"name": "@google-cloud/storage",
"escapedName": "@google-cloud%2fstorage",
"scope": "@google-cloud",
"rawSpec": "^1.6.0",
"saveSpec": null,
"fetchSpec": "^1.6.0"
},
"_requiredBy": [
"/firebase-admin"
],
"_resolved": "https://registry.npmjs.org/@google-cloud/storage/-/storage-1.7.0.tgz",
"_shasum": "07bff573d92d5c294db6a04af246688875a8f74b",
"_spec": "@google-cloud/storage@^1.6.0",
"_where": "C:\\Users\\jlevi\\Downloads\\tr2022-strategy-master\\tr2022-strategy-master\\data analysis\\functions\\node_modules\\firebase-admin",
"author": {
"name": "Google Inc."
},
"bugs": {
"url": "https://github.com/googleapis/nodejs-storage/issues"
},
"bundleDependencies": false,
"contributors": [
{
"name": "Ace Nassri",
"email": "anassri@google.com"
},
{
"name": "Alexander Borovykh",
"email": "immaculate.pine@gmail.com"
},
{
"name": "Alexander Fenster",
"email": "github@fenster.name"
},
{
"name": "Calvin Metcalf",
"email": "calvin.metcalf@gmail.com"
},
{
"name": "Colin Ihrig",
"email": "cjihrig@gmail.com"
},
{
"name": "Cristian Almstrand",
"email": "almstrand@users.noreply.github.com"
},
{
"name": "Dave Gramlich",
"email": "callmehiphop@gmail.com"
},
{
"name": "Dominic Valenciana",
"email": "kiricon@live.com"
},
{
"name": "Eric Uldall",
"email": "ericuldall@gmail.com"
},
{
"name": "Ernest Landrito",
"email": "landrito@google.com"
},
{
"name": "Frank Natividad",
"email": "frankyn@users.noreply.github.com"
},
{
"name": "Jason Dobry",
"email": "jason.dobry@gmail.com"
},
{
"name": "Jason Dobry",
"email": "jmdobry@users.noreply.github.com"
},
{
"name": "Justin Sprigg",
"email": "justin.sprigg@gmail.com"
},
{
"name": "Luke Sneeringer",
"email": "luke@sneeringer.com"
},
{
"name": "Stephen",
"email": "stephenplusplus@users.noreply.github.com"
},
{
"name": "Stephen Sawchuk",
"email": "sawchuk@gmail.com"
},
{
"name": "Stephen Sawchuk",
"email": "stephenplusplus@users.noreply.github.com"
},
{
"name": "Tyler Johnson",
"email": "mail@tyler-johnson.ca"
},
{
"name": "Zach Bjornson",
"email": "bjornson@stanford.edu"
},
{
"name": "greenkeeper[bot]",
"email": "greenkeeper[bot]@users.noreply.github.com"
}
],
"dependencies": {
"@google-cloud/common": "^0.17.0",
"arrify": "^1.0.0",
"async": "^2.0.1",
"compressible": "^2.0.12",
"concat-stream": "^1.5.0",
"create-error-class": "^3.0.2",
"duplexify": "^3.5.0",
"extend": "^3.0.0",
"gcs-resumable-upload": "^0.10.2",
"hash-stream-validation": "^0.2.1",
"is": "^3.0.1",
"mime": "^2.2.0",
"mime-types": "^2.0.8",
"once": "^1.3.1",
"pumpify": "^1.5.1",
"request": "^2.85.0",
"safe-buffer": "^5.1.1",
"snakeize": "^0.1.0",
"stream-events": "^1.0.1",
"through2": "^2.0.0",
"xdg-basedir": "^3.0.0"
},
"deprecated": false,
"description": "Cloud Storage Client Library for Node.js",
"devDependencies": {
"@google-cloud/nodejs-repo-tools": "^2.2.3",
"@google-cloud/pubsub": "*",
"codecov": "^3.0.0",
"eslint": "^4.7.1",
"eslint-config-prettier": "^2.5.0",
"eslint-plugin-node": "^6.0.0",
"eslint-plugin-prettier": "^2.3.1",
"ink-docstrap": "https://github.com/docstrap/docstrap/tarball/master",
"intelli-espower-loader": "^1.0.1",
"jsdoc": "^3.5.4",
"mocha": "^5.0.0",
"normalize-newline": "^3.0.0",
"nyc": "^11.1.0",
"power-assert": "^1.4.4",
"prettier": "^1.7.0",
"prop-assign": "^1.0.0",
"propprop": "^0.3.0",
"proxyquire": "^2.0.0",
"semistandard": "^12.0.0",
"tmp": "^0.0.33",
"uuid": "^3.1.0"
},
"engines": {
"node": ">=4"
},
"files": [
"src",
"AUTHORS",
"CONTRIBUTORS",
"COPYING"
],
"homepage": "https://github.com/googleapis/nodejs-storage#readme",
"keywords": [
"google apis client",
"google api client",
"google apis",
"google api",
"google",
"google cloud platform",
"google cloud",
"cloud",
"google storage",
"storage"
],
"license": "Apache-2.0",
"main": "./src/index.js",
"name": "@google-cloud/storage",
"repository": {
"type": "git",
"url": "git+https://github.com/googleapis/nodejs-storage.git"
},
"scripts": {
"all-test": "npm test && npm run system-test && npm run samples-test",
"cover": "nyc --reporter=lcov mocha --require intelli-espower-loader test/*.js && nyc report",
"docs": "repo-tools exec -- jsdoc -c .jsdoc.js",
"generate-scaffolding": "repo-tools generate all && repo-tools generate lib_samples_readme -l samples/ --config ../.cloud-repo-tools.json",
"lint": "repo-tools lint --cmd eslint -- src/ samples/ system-test/ test/",
"prettier": "repo-tools exec -- prettier --write src/**/*.js samples/*.js samples/**/*.js system-test/**/*.js test/**/*.js",
"samples-test": "npm link && cd samples/ && npm link @google-cloud/storage && npm test && cd ../",
"system-test": "repo-tools test run --cmd mocha -- system-test/ --timeout 600000",
"test": "repo-tools test run --cmd npm -- run cover",
"test-no-cover": "repo-tools test run --cmd mocha -- test/"
},
"version": "1.7.0"
}

View File

@ -0,0 +1,766 @@
/*!
* Copyright 2014 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
const arrify = require('arrify');
const common = require('@google-cloud/common');
const extend = require('extend');
const is = require('is');
const util = require('util');
/**
* Cloud Storage uses access control lists (ACLs) to manage object and
* bucket access. ACLs are the mechanism you use to share objects with other
* users and allow other users to access your buckets and objects.
*
* An ACL consists of one or more entries, where each entry grants permissions
* to an entity. Permissions define the actions that can be performed against an
* object or bucket (for example, `READ` or `WRITE`); the entity defines who the
* permission applies to (for example, a specific user or group of users).
*
* Where an `entity` value is accepted, we follow the format the Cloud Storage
* API expects.
*
* Refer to
* https://cloud.google.com/storage/docs/json_api/v1/defaultObjectAccessControls
* for the most up-to-date values.
*
* - `user-userId`
* - `user-email`
* - `group-groupId`
* - `group-email`
* - `domain-domain`
* - `project-team-projectId`
* - `allUsers`
* - `allAuthenticatedUsers`
*
* Examples:
*
* - The user "liz@example.com" would be `user-liz@example.com`.
* - The group "example@googlegroups.com" would be
* `group-example@googlegroups.com`.
* - To refer to all members of the Google Apps for Business domain
* "example.com", the entity would be `domain-example.com`.
*
* For more detailed information, see
* [About Access Control Lists](http://goo.gl/6qBBPO).
*
* @constructor Acl
* @mixin
* @param {object} options Configuration options.
*/
function Acl(options) {
AclRoleAccessorMethods.call(this);
this.pathPrefix = options.pathPrefix;
this.request_ = options.request;
}
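/*! Usage sketch (illustrative only; assumes a bucket named 'my-bucket'
 * exists and the caller has OWNER access to it).
 *
 * The entity formats listed above are passed verbatim to the ACL methods
 * defined below:
 *
 *   const storage = require('@google-cloud/storage')();
 *   const bucketAcl = storage.bucket('my-bucket').acl;
 *
 *   // Grant read access to a user, a Google group, and a whole domain.
 *   bucketAcl.add({entity: 'user-liz@example.com', role: 'READER'}, function(err, aclObject) {});
 *   bucketAcl.add({entity: 'group-example@googlegroups.com', role: 'READER'}, function(err, aclObject) {});
 *   bucketAcl.add({entity: 'domain-example.com', role: 'READER'}, function(err, aclObject) {});
 */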
/**
* An object of convenience methods to add or delete owner ACL permissions for a
* given entity.
*
* The supported methods include:
*
* - `myFile.acl.owners.addAllAuthenticatedUsers`
* - `myFile.acl.owners.deleteAllAuthenticatedUsers`
* - `myFile.acl.owners.addAllUsers`
* - `myFile.acl.owners.deleteAllUsers`
* - `myFile.acl.owners.addDomain`
* - `myFile.acl.owners.deleteDomain`
* - `myFile.acl.owners.addGroup`
* - `myFile.acl.owners.deleteGroup`
* - `myFile.acl.owners.addProject`
* - `myFile.acl.owners.deleteProject`
* - `myFile.acl.owners.addUser`
* - `myFile.acl.owners.deleteUser`
*
* @return {object}
*
* @example
* const storage = require('@google-cloud/storage')();
* const myBucket = storage.bucket('my-bucket');
* const myFile = myBucket.file('my-file');
*
* //-
* // Add a user as an owner of a file.
* //-
* myFile.acl.owners.addUser('email@example.com', function(err, aclObject) {});
*
* //-
* // For reference, the above command is the same as running the following.
* //-
* myFile.acl.add({
* entity: 'user-email@example.com',
 * role: storage.acl.OWNER_ROLE
* }, function(err, aclObject) {});
*
* //-
* // If the callback is omitted, we'll return a Promise.
* //-
* myFile.acl.owners.addUser('email@example.com').then(function(data) {
* const aclObject = data[0];
* const apiResponse = data[1];
* });
*/
Acl.prototype.owners = {};
/**
* An object of convenience methods to add or delete reader ACL permissions for
* a given entity.
*
* The supported methods include:
*
* - `myFile.acl.readers.addAllAuthenticatedUsers`
* - `myFile.acl.readers.deleteAllAuthenticatedUsers`
* - `myFile.acl.readers.addAllUsers`
* - `myFile.acl.readers.deleteAllUsers`
* - `myFile.acl.readers.addDomain`
* - `myFile.acl.readers.deleteDomain`
* - `myFile.acl.readers.addGroup`
* - `myFile.acl.readers.deleteGroup`
* - `myFile.acl.readers.addProject`
* - `myFile.acl.readers.deleteProject`
* - `myFile.acl.readers.addUser`
* - `myFile.acl.readers.deleteUser`
*
* @return {object}
*
* @example
* const storage = require('@google-cloud/storage')();
* const myBucket = storage.bucket('my-bucket');
* const myFile = myBucket.file('my-file');
*
* //-
* // Add a user as a reader of a file.
* //-
* myFile.acl.readers.addUser('email@example.com', function(err, aclObject) {});
*
* //-
* // For reference, the above command is the same as running the following.
* //-
* myFile.acl.add({
* entity: 'user-email@example.com',
 * role: storage.acl.READER_ROLE
* }, function(err, aclObject) {});
*
* //-
* // If the callback is omitted, we'll return a Promise.
* //-
* myFile.acl.readers.addUser('email@example.com').then(function(data) {
* const aclObject = data[0];
* const apiResponse = data[1];
* });
*/
Acl.prototype.readers = {};
/**
* An object of convenience methods to add or delete writer ACL permissions for
* a given entity.
*
* The supported methods include:
*
* - `myFile.acl.writers.addAllAuthenticatedUsers`
* - `myFile.acl.writers.deleteAllAuthenticatedUsers`
* - `myFile.acl.writers.addAllUsers`
* - `myFile.acl.writers.deleteAllUsers`
* - `myFile.acl.writers.addDomain`
* - `myFile.acl.writers.deleteDomain`
* - `myFile.acl.writers.addGroup`
* - `myFile.acl.writers.deleteGroup`
* - `myFile.acl.writers.addProject`
* - `myFile.acl.writers.deleteProject`
* - `myFile.acl.writers.addUser`
* - `myFile.acl.writers.deleteUser`
*
* @return {object}
*
* @example
* const storage = require('@google-cloud/storage')();
* const myBucket = storage.bucket('my-bucket');
* const myFile = myBucket.file('my-file');
*
* //-
* // Add a user as a writer of a file.
* //-
* myFile.acl.writers.addUser('email@example.com', function(err, aclObject) {});
*
* //-
* // For reference, the above command is the same as running the following.
* //-
* myFile.acl.add({
* entity: 'user-email@example.com',
 * role: storage.acl.WRITER_ROLE
* }, function(err, aclObject) {});
*
* //-
* // If the callback is omitted, we'll return a Promise.
* //-
* myFile.acl.writers.addUser('email@example.com').then(function(data) {
* const aclObject = data[0];
* const apiResponse = data[1];
* });
*/
Acl.prototype.writers = {};
util.inherits(Acl, AclRoleAccessorMethods);
/**
* @typedef {array} AddAclResponse
* @property {object} 0 The Acl Objects.
* @property {object} 1 The full API response.
*/
/**
* @callback AddAclCallback
* @param {?Error} err Request error, if any.
* @param {object} acl The Acl Objects.
* @param {object} apiResponse The full API response.
*/
/**
* Add access controls on a {@link Bucket} or {@link File}.
*
* @see [BucketAccessControls: insert API Documentation]{@link https://cloud.google.com/storage/docs/json_api/v1/bucketAccessControls/insert}
* @see [ObjectAccessControls: insert API Documentation]{@link https://cloud.google.com/storage/docs/json_api/v1/objectAccessControls/insert}
*
* @param {object} options Configuration options.
* @param {string} options.entity Whose permissions will be added.
* @param {string} options.role Permissions allowed for the defined entity.
* See {@link https://cloud.google.com/storage/docs/access-control Access Control}.
* @param {number} [options.generation] **File Objects Only** Select a specific
* revision of this file (as opposed to the latest version, the default).
* @param {string} [options.userProject] The ID of the project which will be
* billed for the request.
* @param {AddAclCallback} [callback] Callback function.
* @returns {Promise<AddAclResponse>}
*
* @example
* const storage = require('@google-cloud/storage')();
* const myBucket = storage.bucket('my-bucket');
* const myFile = myBucket.file('my-file');
*
* const options = {
* entity: 'user-useremail@example.com',
 * role: storage.acl.OWNER_ROLE
* };
*
* myBucket.acl.add(options, function(err, aclObject, apiResponse) {});
*
* //-
* // For file ACL operations, you can also specify a `generation` property.
* // Here is how you would grant ownership permissions to a user on a specific
* // revision of a file.
* //-
* myFile.acl.add({
* entity: 'user-useremail@example.com',
 * role: storage.acl.OWNER_ROLE,
* generation: 1
* }, function(err, aclObject, apiResponse) {});
*
* //-
* // If the callback is omitted, we'll return a Promise.
* //-
* myBucket.acl.add(options).then(function(data) {
* const aclObject = data[0];
* const apiResponse = data[1];
* });
*
* @example <caption>include:samples/acl.js</caption>
* region_tag:storage_add_file_owner
* Example of adding an owner to a file:
*
* @example <caption>include:samples/acl.js</caption>
* region_tag:storage_add_bucket_owner
* Example of adding an owner to a bucket:
*
* @example <caption>include:samples/acl.js</caption>
* region_tag:storage_add_bucket_default_owner
* Example of adding a default owner to a bucket:
*/
Acl.prototype.add = function(options, callback) {
const self = this;
const query = {};
if (options.generation) {
query.generation = options.generation;
}
if (options.userProject) {
query.userProject = options.userProject;
}
this.request(
{
method: 'POST',
uri: '',
qs: query,
json: {
entity: options.entity,
role: options.role.toUpperCase(),
},
},
function(err, resp) {
if (err) {
callback(err, null, resp);
return;
}
callback(null, self.makeAclObject_(resp), resp);
}
);
};
/**
* @typedef {array} RemoveAclResponse
* @property {object} 0 The full API response.
*/
/**
* @callback RemoveAclCallback
* @param {?Error} err Request error, if any.
* @param {object} apiResponse The full API response.
*/
/**
* Delete access controls on a {@link Bucket} or {@link File}.
*
* @see [BucketAccessControls: delete API Documentation]{@link https://cloud.google.com/storage/docs/json_api/v1/bucketAccessControls/delete}
* @see [ObjectAccessControls: delete API Documentation]{@link https://cloud.google.com/storage/docs/json_api/v1/objectAccessControls/delete}
*
* @param {object} options Configuration object.
* @param {string} options.entity Whose permissions will be revoked.
 * @param {number} [options.generation] **File Objects Only** Select a specific
* revision of this file (as opposed to the latest version, the default).
* @param {string} [options.userProject] The ID of the project which will be
* billed for the request.
* @param {RemoveAclCallback} callback The callback function.
* @returns {Promise<RemoveAclResponse>}
*
* @example
* const storage = require('@google-cloud/storage')();
* const myBucket = storage.bucket('my-bucket');
* const myFile = myBucket.file('my-file');
*
* myBucket.acl.delete({
* entity: 'user-useremail@example.com'
* }, function(err, apiResponse) {});
*
* //-
* // For file ACL operations, you can also specify a `generation` property.
* //-
* myFile.acl.delete({
* entity: 'user-useremail@example.com',
* generation: 1
* }, function(err, apiResponse) {});
*
* //-
* // If the callback is omitted, we'll return a Promise.
* //-
 * myFile.acl.delete({entity: 'user-useremail@example.com'}).then(function(data) {
* const apiResponse = data[0];
* });
*
* @example <caption>include:samples/acl.js</caption>
* region_tag:storage_remove_bucket_owner
* Example of removing an owner from a bucket:
*
* @example <caption>include:samples/acl.js</caption>
* region_tag:storage_remove_bucket_default_owner
* Example of removing a default owner from a bucket:
*
* @example <caption>include:samples/acl.js</caption>
* region_tag:storage_remove_file_owner
 * Example of removing an owner from a file:
*/
Acl.prototype.delete = function(options, callback) {
const query = {};
if (options.generation) {
query.generation = options.generation;
}
if (options.userProject) {
query.userProject = options.userProject;
}
this.request(
{
method: 'DELETE',
uri: '/' + encodeURIComponent(options.entity),
qs: query,
},
function(err, resp) {
callback(err, resp);
}
);
};
/**
* @typedef {array} GetAclResponse
* @property {object|object[]} 0 Single or array of Acl Objects.
* @property {object} 1 The full API response.
*/
/**
* @callback GetAclCallback
* @param {?Error} err Request error, if any.
* @param {object|object[]} acl Single or array of Acl Objects.
* @param {object} apiResponse The full API response.
*/
/**
* Get access controls on a {@link Bucket} or {@link File}. If
* an entity is omitted, you will receive an array of all applicable access
* controls.
*
* @see [BucketAccessControls: get API Documentation]{@link https://cloud.google.com/storage/docs/json_api/v1/bucketAccessControls/get}
* @see [ObjectAccessControls: get API Documentation]{@link https://cloud.google.com/storage/docs/json_api/v1/objectAccessControls/get}
*
* @param {object|function} [options] Configuration options. If you want to
* receive a list of all access controls, pass the callback function as the
* only argument.
* @param {string} [options.entity] Whose permissions will be fetched.
* @param {number} [options.generation] **File Objects Only** Select a specific
* revision of this file (as opposed to the latest version, the default).
* @param {string} [options.userProject] The ID of the project which will be
* billed for the request.
* @param {GetAclCallback} [callback] Callback function.
* @returns {Promise<GetAclResponse>}
*
* @example
* const storage = require('@google-cloud/storage')();
* const myBucket = storage.bucket('my-bucket');
* const myFile = myBucket.file('my-file');
*
* myBucket.acl.get({
* entity: 'user-useremail@example.com'
* }, function(err, aclObject, apiResponse) {});
*
* //-
* // Get all access controls.
* //-
* myBucket.acl.get(function(err, aclObjects, apiResponse) {
* // aclObjects = [
* // {
* // entity: 'user-useremail@example.com',
* // role: 'owner'
* // }
* // ]
* });
*
* //-
* // For file ACL operations, you can also specify a `generation` property.
* //-
* myFile.acl.get({
* entity: 'user-useremail@example.com',
* generation: 1
* }, function(err, aclObject, apiResponse) {});
*
* //-
* // If the callback is omitted, we'll return a Promise.
* //-
* myBucket.acl.get().then(function(data) {
* const aclObject = data[0];
* const apiResponse = data[1];
* });
*
* @example <caption>include:samples/acl.js</caption>
* region_tag:storage_print_file_acl
* Example of printing a file's ACL:
*
* @example <caption>include:samples/acl.js</caption>
* region_tag:storage_print_file_acl_for_user
* Example of printing a file's ACL for a specific user:
*
* @example <caption>include:samples/acl.js</caption>
* region_tag:storage_print_bucket_acl
* Example of printing a bucket's ACL:
*
* @example <caption>include:samples/acl.js</caption>
* region_tag:storage_print_bucket_acl_for_user
* Example of printing a bucket's ACL for a specific user:
*/
Acl.prototype.get = function(options, callback) {
const self = this;
let path = '';
const query = {};
if (is.fn(options)) {
callback = options;
options = null;
} else {
path = '/' + encodeURIComponent(options.entity);
if (options.generation) {
query.generation = options.generation;
}
if (options.userProject) {
query.userProject = options.userProject;
}
}
this.request(
{
uri: path,
qs: query,
},
function(err, resp) {
if (err) {
callback(err, null, resp);
return;
}
let results;
if (resp.items) {
results = arrify(resp.items).map(self.makeAclObject_);
} else {
results = self.makeAclObject_(resp);
}
callback(null, results, resp);
}
);
};
/**
* @typedef {array} UpdateAclResponse
* @property {object} 0 The updated Acl Objects.
* @property {object} 1 The full API response.
*/
/**
* @callback UpdateAclCallback
* @param {?Error} err Request error, if any.
* @param {object} acl The updated Acl Objects.
* @param {object} apiResponse The full API response.
*/
/**
* Update access controls on a {@link Bucket} or {@link File}.
*
* @see [BucketAccessControls: update API Documentation]{@link https://cloud.google.com/storage/docs/json_api/v1/bucketAccessControls/update}
* @see [ObjectAccessControls: update API Documentation]{@link https://cloud.google.com/storage/docs/json_api/v1/objectAccessControls/update}
*
* @param {object} options Configuration options.
* @param {string} options.entity Whose permissions will be updated.
* @param {string} options.role Permissions allowed for the defined entity.
* See {@link Storage.acl}.
* @param {number} [options.generation] **File Objects Only** Select a specific
* revision of this file (as opposed to the latest version, the default).
* @param {string} [options.userProject] The ID of the project which will be
* billed for the request.
* @param {UpdateAclCallback} [callback] Callback function.
* @returns {Promise<UpdateAclResponse>}
*
* @example
* const storage = require('@google-cloud/storage')();
* const myBucket = storage.bucket('my-bucket');
* const myFile = myBucket.file('my-file');
*
* const options = {
* entity: 'user-useremail@example.com',
 * role: storage.acl.WRITER_ROLE
* };
*
* myBucket.acl.update(options, function(err, aclObject, apiResponse) {});
*
* //-
* // For file ACL operations, you can also specify a `generation` property.
* //-
* myFile.acl.update({
* entity: 'user-useremail@example.com',
 * role: storage.acl.WRITER_ROLE,
* generation: 1
* }, function(err, aclObject, apiResponse) {});
*
* //-
* // If the callback is omitted, we'll return a Promise.
* //-
* myFile.acl.update(options).then(function(data) {
* const aclObject = data[0];
* const apiResponse = data[1];
* });
*/
Acl.prototype.update = function(options, callback) {
const self = this;
const query = {};
if (options.generation) {
query.generation = options.generation;
}
if (options.userProject) {
query.userProject = options.userProject;
}
this.request(
{
method: 'PUT',
uri: '/' + encodeURIComponent(options.entity),
qs: query,
json: {
role: options.role.toUpperCase(),
},
},
function(err, resp) {
if (err) {
callback(err, null, resp);
return;
}
callback(null, self.makeAclObject_(resp), resp);
}
);
};
/**
* Transform API responses to a consistent object format.
*
* @private
*/
Acl.prototype.makeAclObject_ = function(accessControlObject) {
const obj = {
entity: accessControlObject.entity,
role: accessControlObject.role,
};
if (accessControlObject.projectTeam) {
obj.projectTeam = accessControlObject.projectTeam;
}
return obj;
};
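/*! Illustration of the transform above (the raw resource shape shown here is
 * an assumed example built from the entity/role/projectTeam fields it reads):
 *
 *   // A raw access-control resource returned by the JSON API, e.g.
 *   //   {entity: 'user-liz@example.com', role: 'OWNER', projectTeam: {...}, ...}
 *   // is reduced to just the fields callers rely on:
 *   //   {entity: 'user-liz@example.com', role: 'OWNER', projectTeam: {...}}
 *   // Any other response fields are dropped.
 */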
/**
* Patch requests up to the bucket's request object.
*
* @private
*
 * @param {object} reqOpts Request options, including the HTTP method, a URI
 *     relative to this ACL's path prefix, and any query string or JSON body.
 * @param {function} callback Callback function.
*/
Acl.prototype.request = function(reqOpts, callback) {
reqOpts.uri = this.pathPrefix + reqOpts.uri;
this.request_(reqOpts, callback);
};
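/*! Illustration (the '/acl' prefix is an assumed example; the real prefix is
 * supplied by the parent Bucket or File via `options.pathPrefix`):
 *
 *   // With pathPrefix === '/acl', a call such as
 *   //   acl.request({method: 'DELETE', uri: '/' + encodeURIComponent('allUsers')}, callback);
 *   // is forwarded to the parent's request_ function with
 *   //   uri === '/acl/allUsers'
 */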
/*! Developer Documentation
*
* All async methods (except for streams) will return a Promise in the event
* that a callback is omitted.
*/
common.util.promisifyAll(Acl);
module.exports = Acl;
/**
* Attach functionality to a {@link Storage.acl} instance. This will add an
* object for each role group (owners, readers, and writers), with each object
* containing methods to add or delete a type of entity.
*
* As an example, here are a few methods that are created.
*
* myBucket.acl.readers.deleteGroup('groupId', function(err) {});
*
* myBucket.acl.owners.addUser('email@example.com', function(err, acl) {});
*
* myBucket.acl.writers.addDomain('example.com', function(err, acl) {});
*
* @private
*/
function AclRoleAccessorMethods() {
AclRoleAccessorMethods.roles.forEach(this._assignAccessMethods.bind(this));
}
AclRoleAccessorMethods.accessMethods = ['add', 'delete'];
AclRoleAccessorMethods.entities = [
// Special entity groups that do not require further specification.
'allAuthenticatedUsers',
'allUsers',
// Entity groups that require specification, e.g. `user-email@example.com`.
'domain-',
'group-',
'project-',
'user-',
];
AclRoleAccessorMethods.roles = ['OWNER', 'READER', 'WRITER'];
AclRoleAccessorMethods.prototype._assignAccessMethods = function(role) {
const self = this;
const accessMethods = AclRoleAccessorMethods.accessMethods;
const entities = AclRoleAccessorMethods.entities;
const roleGroup = role.toLowerCase() + 's';
this[roleGroup] = entities.reduce(function(acc, entity) {
const isPrefix = entity.charAt(entity.length - 1) === '-';
accessMethods.forEach(function(accessMethod) {
let method = accessMethod + entity[0].toUpperCase() + entity.substr(1);
if (isPrefix) {
method = method.replace('-', '');
}
// Wrap the parent accessor method (e.g. `add` or `delete`) to avoid the
// more complex API of specifying an `entity` and `role`.
acc[method] = function(entityId, options, callback) {
let apiEntity;
if (is.fn(options)) {
callback = options;
options = {};
}
if (isPrefix) {
apiEntity = entity + entityId;
} else {
// If the entity is not a prefix, it is a special entity group that
// does not require further details. The accessor methods only accept
// a callback.
apiEntity = entity;
callback = entityId;
}
options = extend(
{
entity: apiEntity,
role: role,
},
options
);
const args = [options];
if (is.fn(callback)) {
args.push(callback);
}
return self[accessMethod].apply(self, args);
};
});
return acc;
}, {});
};
module.exports.AclRoleAccessorMethods = AclRoleAccessorMethods;
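/*! Usage sketch (illustrative only; `myFile` stands for any File or Bucket
 * object with an `acl` property, as in the examples above).
 *
 * Each generated accessor simply expands into a call to `add` or `delete`
 * with a pre-built entity/role pair:
 *
 *   // These two calls are equivalent:
 *   myFile.acl.readers.addDomain('example.com', function(err, aclObject) {});
 *   myFile.acl.add({entity: 'domain-example.com', role: 'READER'}, function(err, aclObject) {});
 *
 *   // Prefix-less entity groups take no entity ID, only a callback:
 *   myFile.acl.owners.addAllUsers(function(err, aclObject) {});
 *   // ...which is equivalent to:
 *   myFile.acl.add({entity: 'allUsers', role: 'OWNER'}, function(err, aclObject) {});
 */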

File diff suppressed because it is too large

View File

@ -0,0 +1,116 @@
/*!
* Copyright 2015 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
const common = require('@google-cloud/common');
const util = require('util');
/**
* Create a channel object to interact with a Cloud Storage channel.
*
* @see [Object Change Notification]{@link https://cloud.google.com/storage/docs/object-change-notification}
*
* @class
*
 * @param {Storage} storage A {@link Storage} instance.
 * @param {string} id The ID of the channel.
 * @param {string} resourceId The resource ID of the channel.
*
* @example
* const storage = require('@google-cloud/storage')();
* const channel = storage.channel('id', 'resource-id');
*/
function Channel(storage, id, resourceId) {
const config = {
parent: storage,
baseUrl: '/channels',
// An ID shouldn't be included in the API requests.
// RE: https://github.com/GoogleCloudPlatform/google-cloud-node/issues/1145
id: '',
methods: {
// Only need `request`.
},
};
common.ServiceObject.call(this, config);
this.metadata.id = id;
this.metadata.resourceId = resourceId;
}
util.inherits(Channel, common.ServiceObject);
/**
* @typedef {array} StopResponse
* @property {object} 0 The full API response.
*/
/**
* @callback StopCallback
* @param {?Error} err Request error, if any.
* @param {object} apiResponse The full API response.
*/
/**
* Stop this channel.
*
* @param {StopCallback} [callback] Callback function.
* @returns {Promise<StopResponse>}
*
* @example
* const storage = require('@google-cloud/storage')();
* const channel = storage.channel('id', 'resource-id');
* channel.stop(function(err, apiResponse) {
* if (!err) {
* // Channel stopped successfully.
* }
* });
*
* //-
* // If the callback is omitted, we'll return a Promise.
* //-
* channel.stop().then(function(data) {
* const apiResponse = data[0];
* });
*/
Channel.prototype.stop = function(callback) {
callback = callback || common.util.noop;
this.request(
{
method: 'POST',
uri: '/stop',
json: this.metadata,
},
function(err, apiResponse) {
callback(err, apiResponse);
}
);
};
/*! Developer Documentation
*
* All async methods (except for streams) will return a Promise in the event
* that a callback is omitted.
*/
common.util.promisifyAll(Channel);
/**
* Reference to the {@link Channel} class.
* @name module:@google-cloud/storage.Channel
* @see Channel
*/
module.exports = Channel;

File diff suppressed because it is too large

View File

@ -0,0 +1,300 @@
/*!
* Copyright 2017 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
const arrify = require('arrify');
const common = require('@google-cloud/common');
const extend = require('extend');
const is = require('is');
/**
* Get and set IAM policies for your Cloud Storage bucket.
*
* @see [Cloud Storage IAM Management](https://cloud.google.com/storage/docs/access-control/iam#short_title_iam_management)
* @see [Granting, Changing, and Revoking Access](https://cloud.google.com/iam/docs/granting-changing-revoking-access)
* @see [IAM Roles](https://cloud.google.com/iam/docs/understanding-roles)
*
* @constructor Iam
* @mixin
*
* @param {Bucket} bucket The parent instance.
* @example
* const storage = require('@google-cloud/storage')();
* const bucket = storage.bucket('my-bucket');
* // bucket.iam
*/
function Iam(bucket) {
this.request_ = bucket.request.bind(bucket);
this.resourceId_ = 'buckets/' + bucket.id;
}
/**
* @typedef {object} GetPolicyRequest
* @property {string} userProject The ID of the project which will be billed for
* the request.
*/
/**
* @typedef {array} GetPolicyResponse
* @property {object} 0 The policy.
* @property {object} 1 The full API response.
*/
/**
* @callback GetPolicyCallback
* @param {?Error} err Request error, if any.
 * @param {object} policy The policy.
* @param {object} apiResponse The full API response.
*/
/**
* Get the IAM policy.
*
* @param {GetPolicyRequest} [options] Request options.
* @param {GetPolicyCallback} [callback] Callback function.
* @returns {Promise<GetPolicyResponse>}
*
* @see [Buckets: setIamPolicy API Documentation]{@link https://cloud.google.com/storage/docs/json_api/v1/buckets/getIamPolicy}
*
* @example
* const storage = require('@google-cloud/storage')();
* const bucket = storage.bucket('my-bucket');
* bucket.iam.getPolicy(function(err, policy, apiResponse) {});
*
* //-
* // If the callback is omitted, we'll return a Promise.
* //-
* bucket.iam.getPolicy().then(function(data) {
* const policy = data[0];
* const apiResponse = data[1];
* });
*
* @example <caption>include:samples/iam.js</caption>
* region_tag:storage_view_bucket_iam_members
* Example of retrieving a bucket's IAM policy:
*/
Iam.prototype.getPolicy = function(options, callback) {
if (is.fn(options)) {
callback = options;
options = {};
}
this.request_(
{
uri: '/iam',
qs: options,
},
callback
);
};
/**
* @typedef {array} SetPolicyResponse
* @property {object} 0 The policy.
* @property {object} 1 The full API response.
*/
/**
* @callback SetPolicyCallback
* @param {?Error} err Request error, if any.
 * @param {object} policy The policy.
* @param {object} apiResponse The full API response.
*/
/**
* Set the IAM policy.
*
* @throws {Error} If no policy is provided.
*
* @param {object} policy The policy.
* @param {array} policy.bindings Bindings associate members with roles.
* @param {string} [policy.etag] Etags are used to perform a read-modify-write.
 * @param {object} [options] Configuration object.
* @param {string} [options.userProject] The ID of the project which will be
* billed for the request.
* @param {SetPolicyCallback} callback Callback function.
* @returns {Promise<SetPolicyResponse>}
*
* @see [Buckets: setIamPolicy API Documentation]{@link https://cloud.google.com/storage/docs/json_api/v1/buckets/setIamPolicy}
* @see [IAM Roles](https://cloud.google.com/iam/docs/understanding-roles)
*
* @example
* const storage = require('@google-cloud/storage')();
* const bucket = storage.bucket('my-bucket');
*
* const myPolicy = {
* bindings: [
* {
* role: 'roles/storage.admin',
* members: ['serviceAccount:myotherproject@appspot.gserviceaccount.com']
* }
* ]
* };
*
* bucket.iam.setPolicy(myPolicy, function(err, policy, apiResponse) {});
*
* //-
* // If the callback is omitted, we'll return a Promise.
* //-
* bucket.iam.setPolicy(myPolicy).then(function(data) {
* const policy = data[0];
* const apiResponse = data[1];
* });
*
* @example <caption>include:samples/iam.js</caption>
* region_tag:storage_add_bucket_iam_member
* Example of adding to a bucket's IAM policy:
*
* @example <caption>include:samples/iam.js</caption>
* region_tag:storage_remove_bucket_iam_member
* Example of removing from a bucket's IAM policy:
*/
Iam.prototype.setPolicy = function(policy, options, callback) {
if (!is.object(policy)) {
throw new Error('A policy object is required.');
}
if (is.fn(options)) {
callback = options;
options = {};
}
this.request_(
{
method: 'PUT',
uri: '/iam',
json: extend(
{
resourceId: this.resourceId_,
},
policy
),
qs: options,
},
callback
);
};
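/*! Usage sketch (illustrative only; the bucket name, role, and member are
 * placeholders for values valid in your project).
 *
 * The `etag` mentioned above supports a read-modify-write cycle: fetch the
 * current policy, mutate it locally, then write it back so the server can
 * detect concurrent edits:
 *
 *   const storage = require('@google-cloud/storage')();
 *   const bucket = storage.bucket('my-bucket');
 *
 *   bucket.iam.getPolicy().then(function(data) {
 *     const policy = data[0];
 *     policy.bindings.push({
 *       role: 'roles/storage.objectViewer',
 *       members: ['user:liz@example.com'],
 *     });
 *     // `policy` still carries the etag returned by getPolicy.
 *     return bucket.iam.setPolicy(policy);
 *   }).then(function(data) {
 *     const updatedPolicy = data[0];
 *   });
 */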
/**
* @typedef {array} TestIamPermissionsResponse
 * @property {object} 0 A hash of the tested permission names, each mapped to a boolean indicating whether the caller is allowed it.
* @property {object} 1 The full API response.
*/
/**
* @callback TestIamPermissionsCallback
* @param {?Error} err Request error, if any.
 * @param {object} permissions A hash of the tested permission names, each mapped to a boolean indicating whether the caller is allowed it.
* @param {object} apiResponse The full API response.
*/
/**
* Test a set of permissions for a resource.
*
* @throws {Error} If permissions are not provided.
*
* @param {string|string[]} permissions The permission(s) to test for.
* @param {object} [options] Configuration object.
* @param {string} [options.userProject] The ID of the project which will be
* billed for the request.
* @param {TestIamPermissionsCallback} [callback] Callback function.
* @returns {Promise<TestIamPermissionsResponse>}
*
* @see [Buckets: testIamPermissions API Documentation]{@link https://cloud.google.com/storage/docs/json_api/v1/buckets/testIamPermissions}
*
* @example
* const storage = require('@google-cloud/storage')();
* const bucket = storage.bucket('my-bucket');
*
* //-
* // Test a single permission.
* //-
* const test = 'storage.buckets.delete';
*
* bucket.iam.testPermissions(test, function(err, permissions, apiResponse) {
* console.log(permissions);
* // {
* // "storage.buckets.delete": true
* // }
* });
*
* //-
* // Test several permissions at once.
* //-
* const tests = [
* 'storage.buckets.delete',
* 'storage.buckets.get'
* ];
*
* bucket.iam.testPermissions(tests, function(err, permissions) {
* console.log(permissions);
* // {
* // "storage.buckets.delete": false,
* // "storage.buckets.get": true
* // }
* });
*
* //-
* // If the callback is omitted, we'll return a Promise.
* //-
* bucket.iam.testPermissions(test).then(function(data) {
* const permissions = data[0];
* const apiResponse = data[1];
* });
*/
Iam.prototype.testPermissions = function(permissions, options, callback) {
if (!is.array(permissions) && !is.string(permissions)) {
throw new Error('Permissions are required.');
}
if (is.fn(options)) {
callback = options;
options = {};
}
options = extend(
{
permissions: arrify(permissions),
},
options
);
this.request_(
{
uri: '/iam/testPermissions',
qs: options,
useQuerystring: true,
},
function(err, resp) {
if (err) {
callback(err, null, resp);
return;
}
const availablePermissions = arrify(resp.permissions);
const permissionsHash = arrify(permissions).reduce(function(acc, permission) {
acc[permission] = availablePermissions.indexOf(permission) > -1;
return acc;
}, {});
callback(null, permissionsHash, resp);
}
);
};
/*! Developer Documentation
*
* All async methods (except for streams) will return a Promise in the event
* that a callback is omitted.
*/
common.util.promisifyAll(Iam);
module.exports = Iam;

View File

@ -0,0 +1,591 @@
/**
* Copyright 2014-2017 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
const arrify = require('arrify');
const common = require('@google-cloud/common');
const extend = require('extend');
const util = require('util');
const Bucket = require('./bucket.js');
const Channel = require('./channel.js');
const File = require('./file.js');
/**
* @typedef {object} ClientConfig
* @property {string} [projectId] The project ID from the Google Developer's
* Console, e.g. 'grape-spaceship-123'. We will also check the environment
* variable `GCLOUD_PROJECT` for your project ID. If your app is running in
* an environment which supports {@link https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application Application Default Credentials},
* your project ID will be detected automatically.
 * @property {string} [keyFilename] Full path to a .json, .pem, or .p12 key
* downloaded from the Google Developers Console. If you provide a path to a
* JSON file, the `projectId` option above is not necessary. NOTE: .pem and
* .p12 require you to specify the `email` option as well.
* @property {string} [email] Account email address. Required when using a .pem
* or .p12 keyFilename.
* @property {object} [credentials] Credentials object.
* @property {string} [credentials.client_email]
* @property {string} [credentials.private_key]
* @property {boolean} [autoRetry=true] Automatically retry requests if the
* response is related to rate limits or certain intermittent server errors.
* We will exponentially backoff subsequent requests by default.
* @property {number} [maxRetries=3] Maximum number of automatic retries
* attempted before returning the error.
* @property {Constructor} [promise] Custom promise module to use instead of
* native Promises.
*/
/*! Developer Documentation
*
* Invoke this method to create a new Storage object bound with pre-determined
* configuration options. For each object that can be created (e.g., a bucket),
* there is an equivalent static and instance method. While they are classes,
* they can be instantiated without use of the `new` keyword.
*/
/**
* <h4>ACLs</h4>
* Cloud Storage uses access control lists (ACLs) to manage object and
* bucket access. ACLs are the mechanism you use to share files with other users
* and allow other users to access your buckets and files.
*
* To learn more about ACLs, read this overview on
* [Access Control](https://cloud.google.com/storage/docs/access-control).
*
* @see [Cloud Storage overview]{@link https://cloud.google.com/storage/docs/overview}
* @see [Access Control]{@link https://cloud.google.com/storage/docs/access-control}
*
* @class
* @hideconstructor
*
* @example <caption>Create a client that uses Application Default Credentials (ADC)</caption>
* const Storage = require('@google-cloud/storage');
* const storage = new Storage();
*
* @example <caption>Create a client with explicit credentials</caption>
* const Storage = require('@google-cloud/storage');
* const storage = new Storage({
* projectId: 'your-project-id',
* keyFilename: '/path/to/keyfile.json'
* });
*
* @param {ClientConfig} [options] Configuration options.
*/
function Storage(options) {
if (!(this instanceof Storage)) {
return new Storage(options);
}
options = common.util.normalizeArguments(this, options);
const config = {
baseUrl: 'https://www.googleapis.com/storage/v1',
projectIdRequired: false,
scopes: [
'https://www.googleapis.com/auth/iam',
'https://www.googleapis.com/auth/cloud-platform',
'https://www.googleapis.com/auth/devstorage.full_control',
],
packageJson: require('../package.json'),
};
common.Service.call(this, config, options);
}
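/*! Usage sketch (illustrative only; the project ID and key file path are
 * placeholders).
 *
 * A client configured with the ClientConfig options documented above:
 *
 *   const Storage = require('@google-cloud/storage');
 *   const storage = new Storage({
 *     projectId: 'grape-spaceship-123',
 *     keyFilename: '/path/to/keyfile.json',
 *     autoRetry: true, // retry rate-limit and intermittent server errors
 *     maxRetries: 3,   // give up after three automatic retries
 *   });
 */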
util.inherits(Storage, common.Service);
/**
* Cloud Storage uses access control lists (ACLs) to manage object and
* bucket access. ACLs are the mechanism you use to share objects with other
* users and allow other users to access your buckets and objects.
*
* This object provides constants to refer to the three permission levels that
* can be granted to an entity:
*
 * - `storage.acl.OWNER_ROLE` - ("OWNER")
 * - `storage.acl.READER_ROLE` - ("READER")
 * - `storage.acl.WRITER_ROLE` - ("WRITER")
*
* @see [About Access Control Lists]{@link https://cloud.google.com/storage/docs/access-control/lists}
*
* @name Storage.acl
* @type {object}
* @property {string} OWNER_ROLE
* @property {string} READER_ROLE
* @property {string} WRITER_ROLE
*
* @example
* const storage = require('@google-cloud/storage')();
* const albums = storage.bucket('albums');
*
* //-
* // Make all of the files currently in a bucket publicly readable.
* //-
* const options = {
* entity: 'allUsers',
* role: storage.acl.READER_ROLE
* };
*
* albums.acl.add(options, function(err, aclObject) {});
*
* //-
* // Make any new objects added to a bucket publicly readable.
* //-
* albums.acl.default.add(options, function(err, aclObject) {});
*
* //-
* // Grant a user ownership permissions to a bucket.
* //-
* albums.acl.add({
* entity: 'user-useremail@example.com',
* role: storage.acl.OWNER_ROLE
* }, function(err, aclObject) {});
*
* //-
* // If the callback is omitted, we'll return a Promise.
* //-
* albums.acl.add(options).then(function(data) {
* const aclObject = data[0];
* const apiResponse = data[1];
* });
*/
Storage.acl = {
OWNER_ROLE: 'OWNER',
READER_ROLE: 'READER',
WRITER_ROLE: 'WRITER',
};
/**
* Reference to {@link Storage.acl}.
*
* @name Storage#acl
* @see Storage.acl
*/
Storage.prototype.acl = Storage.acl;
/**
* Get a reference to a Cloud Storage bucket.
*
* @param {string} name Name of the bucket.
* @param {object} [options] Configuration object.
* @param {string} [options.kmsKeyName] A Cloud KMS key that will be used to
* encrypt objects inserted into this bucket, if no encryption method is
* specified.
* @param {string} [options.userProject] User project to be billed for all
* requests made from this Bucket object.
* @returns {Bucket}
* @see Bucket
*
* @example
* const storage = require('@google-cloud/storage')();
* const albums = storage.bucket('albums');
* const photos = storage.bucket('photos');
*/
Storage.prototype.bucket = function(name, options) {
if (!name) {
throw new Error('A bucket name is needed to use Cloud Storage.');
}
return new Bucket(this, name, options);
};
/**
* Reference a channel to receive notifications about changes to your bucket.
*
* @param {string} id The ID of the channel.
* @param {string} resourceId The resource ID of the channel.
* @returns {Channel}
* @see Channel
*
* @example
* const storage = require('@google-cloud/storage')();
* const channel = storage.channel('id', 'resource-id');
*/
Storage.prototype.channel = function(id, resourceId) {
return new Channel(this, id, resourceId);
};
/**
* Metadata to set for the bucket.
*
* @typedef {object} CreateBucketRequest
* @property {boolean} [coldline=false] Specify the storage class as Coldline.
* @property {boolean} [dra=false] Specify the storage class as Durable Reduced
* Availability.
* @property {boolean} [multiRegional=false] Specify the storage class as
* Multi-Regional.
* @property {boolean} [nearline=false] Specify the storage class as Nearline.
* @property {boolean} [regional=false] Specify the storage class as Regional.
* @property {boolean} [requesterPays=false] **Early Access Testers Only**
* Force the use of the User Project metadata field to assign operational
* costs when an operation is made on a Bucket and its objects.
* @property {string} [userProject] The ID of the project which will be billed
* for the request.
*/
/**
* @typedef {array} CreateBucketResponse
* @property {Bucket} 0 The new {@link Bucket}.
* @property {object} 1 The full API response.
*/
/**
* @callback CreateBucketCallback
* @param {?Error} err Request error, if any.
* @param {Bucket} bucket The new {@link Bucket}.
* @param {object} apiResponse The full API response.
*/
/**
* Create a bucket.
*
* Cloud Storage uses a flat namespace, so you can't create a bucket with
* a name that is already in use. For more information, see
* [Bucket Naming Guidelines](https://cloud.google.com/storage/docs/bucketnaming.html#requirements).
*
* @see [Buckets: insert API Documentation]{@link https://cloud.google.com/storage/docs/json_api/v1/buckets/insert}
* @see [Storage Classes]{@link https://cloud.google.com/storage/docs/storage-classes}
*
* @param {string} name Name of the bucket to create.
* @param {CreateBucketRequest} [metadata] Metadata to set for the bucket.
* @param {CreateBucketCallback} [callback] Callback function.
* @returns {Promise<CreateBucketResponse>}
* @throws {Error} If a name is not provided.
* @see Bucket#create
*
* @example
* const storage = require('@google-cloud/storage')();
* const callback = function(err, bucket, apiResponse) {
* // `bucket` is a Bucket object.
* };
*
* storage.createBucket('new-bucket', callback);
*
* //-
* // Create a bucket in a specific location and region. <em>See the <a
* // href="https://cloud.google.com/storage/docs/json_api/v1/buckets/insert">
* // Official JSON API docs</a> for complete details on the `location` option.
* // </em>
* //-
* const metadata = {
* location: 'US-CENTRAL1',
* regional: true
* };
*
* storage.createBucket('new-bucket', metadata, callback);
*
* //-
* // Enable versioning on a new bucket.
* //-
* const metadata = {
* versioning: {
* enabled: true
* }
* };
*
* storage.createBucket('new-bucket', metadata, callback);
*
* //-
* // If the callback is omitted, we'll return a Promise.
* //-
* storage.createBucket('new-bucket').then(function(data) {
* const bucket = data[0];
* const apiResponse = data[1];
* });
*
* @example <caption>include:samples/buckets.js</caption>
* region_tag:storage_create_bucket
* Another example:
*/
Storage.prototype.createBucket = function(name, metadata, callback) {
const self = this;
if (!name) {
throw new Error('A name is required to create a bucket.');
}
if (!callback) {
callback = metadata;
metadata = {};
}
const body = extend({}, metadata, {
name: name,
});
const storageClasses = {
coldline: 'COLDLINE',
dra: 'DURABLE_REDUCED_AVAILABILITY',
multiRegional: 'MULTI_REGIONAL',
nearline: 'NEARLINE',
regional: 'REGIONAL',
};
Object.keys(storageClasses).forEach(function(storageClass) {
if (body[storageClass]) {
body.storageClass = storageClasses[storageClass];
delete body[storageClass];
}
});
if (body.requesterPays) {
body.billing = {
requesterPays: body.requesterPays,
};
delete body.requesterPays;
}
const query = {
project: this.projectId,
};
if (body.userProject) {
query.userProject = body.userProject;
delete body.userProject;
}
this.request(
{
method: 'POST',
uri: '/b',
qs: query,
json: body,
},
function(err, resp) {
if (err) {
callback(err, null, resp);
return;
}
const bucket = self.bucket(name);
bucket.metadata = resp;
callback(null, bucket, resp);
}
);
};
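/*! Usage sketch (illustrative only; 'new-bucket' and `callback` are
 * placeholders).
 *
 * The boolean metadata flags handled above are shorthand for `storageClass`,
 * and `requesterPays` is folded into `billing`, so these two calls send the
 * same request body:
 *
 *   storage.createBucket('new-bucket', {coldline: true, requesterPays: true}, callback);
 *
 *   storage.createBucket('new-bucket', {
 *     storageClass: 'COLDLINE',
 *     billing: {requesterPays: true},
 *   }, callback);
 */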
/**
* Query object for listing buckets.
*
* @typedef {object} GetBucketsRequest
* @property {boolean} [autoPaginate=true] Have pagination handled
* automatically.
* @property {number} [maxApiCalls] Maximum number of API calls to make.
* @property {number} [maxResults] Maximum number of items plus prefixes to
* return.
* @property {string} [pageToken] A previously-returned page token
* representing part of the larger set of results to view.
* @property {string} [userProject] The ID of the project which will be billed
* for the request.
*/
/**
* @typedef {array} GetBucketsResponse
* @property {Bucket[]} 0 Array of {@link Bucket} instances.
*/
/**
* @callback GetBucketsCallback
* @param {?Error} err Request error, if any.
* @param {Bucket[]} buckets Array of {@link Bucket} instances.
*/
/**
* Get Bucket objects for all of the buckets in your project.
*
* @see [Buckets: list API Documentation]{@link https://cloud.google.com/storage/docs/json_api/v1/buckets/list}
*
* @param {GetBucketsRequest} [query] Query object for listing buckets.
* @param {GetBucketsCallback} [callback] Callback function.
* @returns {Promise<GetBucketsResponse>}
*
* @example
* const storage = require('@google-cloud/storage')();
* storage.getBuckets(function(err, buckets) {
* if (!err) {
* // buckets is an array of Bucket objects.
* }
* });
*
* //-
* // To control how many API requests are made and page through the results
* // manually, set `autoPaginate` to `false`.
* //-
* const callback = function(err, buckets, nextQuery, apiResponse) {
* if (nextQuery) {
* // More results exist.
* storage.getBuckets(nextQuery, callback);
* }
*
* // The `metadata` property is populated for you with the metadata at the
* // time of fetching.
* buckets[0].metadata;
*
* // However, in cases where you are concerned the metadata could have
* // changed, use the `getMetadata` method.
* buckets[0].getMetadata(function(err, metadata, apiResponse) {});
* };
*
* storage.getBuckets({
* autoPaginate: false
* }, callback);
*
* //-
* // If the callback is omitted, we'll return a Promise.
* //-
* storage.getBuckets().then(function(data) {
* const buckets = data[0];
* });
*
* @example <caption>include:samples/buckets.js</caption>
* region_tag:storage_list_buckets
* Another example:
*/
Storage.prototype.getBuckets = function(query, callback) {
const self = this;
if (!callback) {
callback = query;
query = {};
}
query.project = query.project || this.projectId;
this.request(
{
uri: '/b',
qs: query,
},
function(err, resp) {
if (err) {
callback(err, null, null, resp);
return;
}
const buckets = arrify(resp.items).map(function(bucket) {
const bucketInstance = self.bucket(bucket.id);
bucketInstance.metadata = bucket;
return bucketInstance;
});
let nextQuery = null;
if (resp.nextPageToken) {
nextQuery = extend({}, query, {pageToken: resp.nextPageToken});
}
callback(null, buckets, nextQuery, resp);
}
);
};
/**
* Get {@link Bucket} objects for all of the buckets in your project as
* a readable object stream.
*
* @method Storage#getBucketsStream
* @param {GetBucketsRequest} [query] Query object for listing buckets.
* @returns {ReadableStream} A readable stream that emits {@link Bucket} instances.
*
* @example
* storage.getBucketsStream()
* .on('error', console.error)
* .on('data', function(bucket) {
* // bucket is a Bucket object.
* })
* .on('end', function() {
* // All buckets retrieved.
* });
*
* //-
* // If you anticipate many results, you can end a stream early to prevent
* // unnecessary processing and API requests.
* //-
* storage.getBucketsStream()
* .on('data', function(bucket) {
* this.end();
* });
*/
Storage.prototype.getBucketsStream = common.paginator.streamify('getBuckets');
/*! Developer Documentation
*
* These methods can be auto-paginated.
*/
common.paginator.extend(Storage, 'getBuckets');
/*! Developer Documentation
*
* All async methods (except for streams) will return a Promise in the event
* that a callback is omitted.
*/
common.util.promisifyAll(Storage, {
exclude: ['bucket', 'channel'],
});
/**
* {@link Bucket} class.
*
* @name Storage.Bucket
* @see Bucket
* @type {Constructor}
*/
Storage.Bucket = Bucket;
/**
* {@link Channel} class.
*
* @name Storage.Channel
* @see Channel
* @type {Constructor}
*/
Storage.Channel = Channel;
/**
* {@link File} class.
*
* @name Storage.File
* @see File
* @type {Constructor}
*/
Storage.File = File;
/**
* The default export of the `@google-cloud/storage` package is the
* {@link Storage} class, which also serves as a factory function which produces
* {@link Storage} instances.
*
* See {@link Storage} and {@link ClientConfig} for client methods and
* configuration options.
*
* @module {Storage} @google-cloud/storage
* @alias nodejs-storage
*
* @example <caption>Install the client library with <a href="https://www.npmjs.com/">npm</a>:</caption>
* npm install --save @google-cloud/storage
*
* @example <caption>Import the client library</caption>
* const Storage = require('@google-cloud/storage');
*
* @example <caption>Create a client that uses <a href="https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application">Application Default Credentials (ADC)</a>:</caption>
* const storage = new Storage();
*
* @example <caption>Create a client with <a href="https://cloud.google.com/docs/authentication/production#obtaining_and_providing_service_account_credentials_manually">explicit credentials</a>:</caption>
* const storage = new Storage({
* projectId: 'your-project-id',
* keyFilename: '/path/to/keyfile.json'
* });
*
* @example <caption>include:samples/quickstart.js</caption>
* region_tag:storage_quickstart
* Full quickstart example:
*/
module.exports = Storage;
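/*! Usage sketch (illustrative only; the project ID is a placeholder).
 *
 * Because the constructor guards against a missing `new` (see the
 * `instanceof` check near the top of this file), both invocation styles
 * produce a Storage client:
 *
 *   const Storage = require('@google-cloud/storage');
 *
 *   const clientA = new Storage({projectId: 'grape-spaceship-123'});
 *   const clientB = Storage({projectId: 'grape-spaceship-123'});
 */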

View File

@ -0,0 +1,350 @@
/*!
* Copyright 2017 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
const common = require('@google-cloud/common');
const is = require('is');
const util = require('util');
/**
* A Notification object is created from your {@link Bucket} object using
* {@link Bucket#notification}. Use it to interact with Cloud Pub/Sub
* notifications.
*
* @see [Cloud Pub/Sub Notifications for Google Cloud Storage]{@link https://cloud.google.com/storage/docs/pubsub-notifications}
*
* @class
* @hideconstructor
*
* @param {Bucket} bucket The bucket instance this notification is attached to.
* @param {string} id The ID of the notification.
*
* @example
* const storage = require('@google-cloud/storage')();
* const myBucket = storage.bucket('my-bucket');
*
* const notification = myBucket.notification('1');
*/
function Notification(bucket, id) {
const methods = {
/**
* Creates a notification subscription for the bucket.
*
* @see [Notifications: insert]{@link https://cloud.google.com/storage/docs/json_api/v1/notifications/insert}
*
* @param {Topic|string} topic The Cloud PubSub topic to which this
* subscription publishes. If the project ID is omitted, the current
* project ID will be used.
*
* Acceptable formats are:
* - `projects/grape-spaceship-123/topics/my-topic`
*
* - `my-topic`
* @param {CreateNotificationRequest} [options] Metadata to set for
* the notification.
* @param {CreateNotificationCallback} [callback] Callback function.
* @returns {Promise<CreateNotificationResponse>}
* @throws {Error} If a valid topic is not provided.
*
* @example
* const storage = require('@google-cloud/storage')();
* const myBucket = storage.bucket('my-bucket');
* const notification = myBucket.notification('1');
*
* notification.create(function(err, notification, apiResponse) {
* if (!err) {
* // The notification was created successfully.
* }
* });
*
* //-
* // If the callback is omitted, we'll return a Promise.
* //-
* notification.create().then(function(data) {
* const notification = data[0];
* const apiResponse = data[1];
* });
*/
create: true,
/**
* @typedef {array} NotificationExistsResponse
* @property {boolean} 0 Whether the notification exists or not.
*/
/**
* @callback NotificationExistsCallback
* @param {?Error} err Request error, if any.
* @param {boolean} exists Whether the notification exists or not.
*/
/**
* Check if the notification exists.
*
* @param {NotificationExistsCallback} [callback] Callback function.
* @returns {Promise<NotificationExistsResponse>}
*
* @example
* const storage = require('@google-cloud/storage')();
* const myBucket = storage.bucket('my-bucket');
* const notification = myBucket.notification('1');
*
* notification.exists(function(err, exists) {});
*
* //-
* // If the callback is omitted, we'll return a Promise.
* //-
* notification.exists().then(function(data) {
* const exists = data[0];
* });
*/
exists: true,
};
common.ServiceObject.call(this, {
parent: bucket,
baseUrl: '/notificationConfigs',
id: id.toString(),
createMethod: bucket.createNotification.bind(bucket),
methods: methods,
});
}
util.inherits(Notification, common.ServiceObject);
/**
* @typedef {array} DeleteNotificationResponse
* @property {object} 0 The full API response.
*/
/**
* @callback DeleteNotificationCallback
* @param {?Error} err Request error, if any.
* @param {object} apiResponse The full API response.
*/
/**
* Permanently deletes a notification subscription.
*
* @see [Notifications: delete API Documentation]{@link https://cloud.google.com/storage/docs/json_api/v1/notifications/delete}
*
* @param {object} [options] Configuration options.
* @param {string} [options.userProject] The ID of the project which will be
* billed for the request.
* @param {DeleteNotificationCallback} [callback] Callback function.
* @returns {Promise<DeleteNotificationResponse>}
*
* @example
* const storage = require('@google-cloud/storage')();
* const myBucket = storage.bucket('my-bucket');
* const notification = myBucket.notification('1');
*
* notification.delete(function(err, apiResponse) {});
*
* //-
* // If the callback is omitted, we'll return a Promise.
* //-
* notification.delete().then(function(data) {
* const apiResponse = data[0];
* });
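 *
 * //-
 * // The documented `userProject` option bills the request to another
 * // project; the project ID below is only a placeholder.
 * //-
 * notification.delete({
 *   userProject: 'grape-spaceship-123'
 * }, function(err, apiResponse) {});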
*
* @example <caption>include:samples/notifications.js</caption>
* region_tag:storage_delete_notification
* Another example:
*/
Notification.prototype.delete = function(options, callback) {
if (is.fn(options)) {
callback = options;
options = {};
}
this.request(
{
method: 'DELETE',
uri: '',
qs: options,
},
callback || common.util.noop
);
};
/**
* @typedef {array} GetNotificationResponse
 * @property {Notification} 0 The {@link Notification}.
* @property {object} 1 The full API response.
*/
/**
* @callback GetNotificationCallback
* @param {?Error} err Request error, if any.
* @param {Notification} notification The {@link Notification}.
* @param {object} apiResponse The full API response.
*/
/**
* Get a notification and its metadata if it exists.
*
* @see [Notifications: get API Documentation]{@link https://cloud.google.com/storage/docs/json_api/v1/notifications/get}
*
* @param {object} [options] Configuration options.
* See {@link Bucket#createNotification} for create options.
 * @param {boolean} [options.autoCreate] Automatically create the
 *     notification if it does not exist. Default: `false`.
* @param {string} [options.userProject] The ID of the project which will be
* billed for the request.
* @param {GetNotificationCallback} [callback] Callback function.
 * @returns {Promise<GetNotificationResponse>}
*
* @example
* const storage = require('@google-cloud/storage')();
* const myBucket = storage.bucket('my-bucket');
* const notification = myBucket.notification('1');
*
* notification.get(function(err, notification, apiResponse) {
* // `notification.metadata` has been populated.
* });
*
* //-
* // If the callback is omitted, we'll return a Promise.
* //-
* notification.get().then(function(data) {
* const notification = data[0];
* const apiResponse = data[1];
* });
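 *
 * //-
 * // A sketch of the documented `autoCreate` option: if the notification
 * // does not exist, it is created first. Any create options described in
 * // {@link Bucket#createNotification} are assumed to be passed alongside.
 * //-
 * notification.get({
 *   autoCreate: true
 * }, function(err, notification, apiResponse) {
 *   // `notification.metadata` has been populated.
 * });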
*/
Notification.prototype.get = function(options, callback) {
const self = this;
if (is.fn(options)) {
callback = options;
options = {};
}
const autoCreate = options.autoCreate;
delete options.autoCreate;
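  // Handles the result of an auto-create attempt; a 409 response means the
  // notification was created by another caller in the meantime, so re-fetch it.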
function onCreate(err, notification, apiResponse) {
if (err) {
if (err.code === 409) {
self.get(options, callback);
return;
}
callback(err, null, apiResponse);
return;
}
callback(null, notification, apiResponse);
}
this.getMetadata(options, function(err, metadata) {
if (err) {
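      // The notification does not exist yet; create it when `autoCreate` was
      // requested, forwarding any remaining options to the create call.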
if (err.code === 404 && autoCreate) {
const args = [];
if (!is.empty(options)) {
args.push(options);
}
args.push(onCreate);
self.create.apply(self, args);
return;
}
callback(err, null, metadata);
return;
}
callback(null, self, metadata);
});
};
/**
* @typedef {array} GetNotificationMetadataResponse
* @property {object} 0 The notification metadata.
* @property {object} 1 The full API response.
*/
/**
* @callback GetNotificationMetadataCallback
* @param {?Error} err Request error, if any.
 * @param {object} metadata The notification metadata.
* @param {object} apiResponse The full API response.
*/
/**
* Get the notification's metadata.
*
* @see [Notifications: get API Documentation]{@link https://cloud.google.com/storage/docs/json_api/v1/notifications/get}
*
* @param {object} [options] Configuration options.
* @param {string} [options.userProject] The ID of the project which will be
* billed for the request.
* @param {GetNotificationMetadataCallback} [callback] Callback function.
* @returns {Promise<GetNotificationMetadataResponse>}
*
* @example
* const storage = require('@google-cloud/storage')();
* const myBucket = storage.bucket('my-bucket');
* const notification = myBucket.notification('1');
*
* notification.getMetadata(function(err, metadata, apiResponse) {});
*
* //-
* // If the callback is omitted, we'll return a Promise.
* //-
* notification.getMetadata().then(function(data) {
* const metadata = data[0];
* const apiResponse = data[1];
* });
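 *
 * //-
 * // The documented `userProject` option bills the request to another
 * // project; the project ID below is only a placeholder.
 * //-
 * notification.getMetadata({
 *   userProject: 'grape-spaceship-123'
 * }, function(err, metadata, apiResponse) {});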
*
* @example <caption>include:samples/notifications.js</caption>
* region_tag:storage_notifications_get_metadata
* Another example:
*/
Notification.prototype.getMetadata = function(options, callback) {
const self = this;
if (is.fn(options)) {
callback = options;
options = {};
}
this.request(
{
uri: '',
qs: options,
},
function(err, resp) {
if (err) {
callback(err, null, resp);
return;
}
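      // Cache the latest metadata on the instance before invoking the callback.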
self.metadata = resp;
callback(null, self.metadata, resp);
}
);
};
/*! Developer Documentation
*
* All async methods (except for streams) will return a Promise in the event
* that a callback is omitted.
*/
common.util.promisifyAll(Notification);
/**
* Reference to the {@link Notification} class.
* @name module:@google-cloud/storage.Notification
* @see Notification
*/
module.exports = Notification;