From: Viacheslav Dubeyko <slava@dubeyko.com>
To: ceph-devel@vger.kernel.org
Cc: idryomov@gmail.com, linux-fsdevel@vger.kernel.org,
pdonnell@redhat.com, amarkuze@redhat.com, Slava.Dubeyko@ibm.com,
slava@dubeyko.com
Subject: [PATCH] ceph: fix overflowed value issue in ceph_submit_write()
Date: Wed, 4 Jun 2025 15:41:06 -0700
Message-ID: <20250604224106.396310-1-slava@dubeyko.com>
From: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>

The Coverity Scan service has detected an overflowed-value
issue in ceph_submit_write() [1]. The CID 1646339 defect
report explains: "The overflowed value due to
arithmetic on constants is too small or unexpectedly
negative, causing incorrect computations.
In ceph_submit_write: Integer overflow occurs in
arithmetic on constant operands (CWE-190)".

This patch adds a check of ceph_wbc->locked_pages against
zero and returns -EINVAL if no pages are locked, preventing
an underflow in the ceph_wbc->pages[ceph_wbc->locked_pages - 1]
index calculation. It also introduces a processed_pages
variable that counts the pages processed in the current
request; checking processed_pages against zero protects
against a wrapped-around index when selecting a page from
the ceph_wbc->pages[] array.
[1] https://scan5.scan.coverity.com/#/project-view/64304/10063?selectedIssue=1646339
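
The underflow being guarded against can be shown in isolation with a
minimal, self-contained sketch; the struct and function names below
(wbc_sketch, last_page_index) are hypothetical stand-ins for
illustration only, not the kernel code:

```c
#include <stddef.h>

/* Simplified stand-in for the writeback control state. */
struct wbc_sketch {
	unsigned locked_pages;	/* mirrors ceph_wbc->locked_pages */
};

/*
 * Without the zero check, locked_pages - 1 wraps around to UINT_MAX
 * when locked_pages == 0, yielding a wildly out-of-bounds array
 * index. The guard added by the patch bails out first.
 */
static int last_page_index(const struct wbc_sketch *wbc, unsigned *index)
{
	if (wbc->locked_pages == 0)
		return -1;	/* the patch returns -EINVAL here */
	*index = wbc->locked_pages - 1;
	return 0;
}
```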
Signed-off-by: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
---
fs/ceph/addr.c | 22 +++++++++++++++-------
1 file changed, 15 insertions(+), 7 deletions(-)
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index b95c4cb21c13..afbb7aba283e 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1411,6 +1411,7 @@ int ceph_submit_write(struct address_space *mapping,
bool caching = ceph_is_cache_enabled(inode);
u64 offset;
u64 len;
+ unsigned processed_pages;
unsigned i;
new_request:
@@ -1438,6 +1439,9 @@ int ceph_submit_write(struct address_space *mapping,
BUG_ON(IS_ERR(req));
}
+ if (ceph_wbc->locked_pages == 0)
+ return -EINVAL;
+
page = ceph_wbc->pages[ceph_wbc->locked_pages - 1];
BUG_ON(len < ceph_fscrypt_page_offset(page) + thp_size(page) - offset);
@@ -1474,6 +1478,7 @@ int ceph_submit_write(struct address_space *mapping,
len = 0;
ceph_wbc->data_pages = ceph_wbc->pages;
ceph_wbc->op_idx = 0;
+ processed_pages = 0;
for (i = 0; i < ceph_wbc->locked_pages; i++) {
u64 cur_offset;
@@ -1517,19 +1522,22 @@ int ceph_submit_write(struct address_space *mapping,
ceph_set_page_fscache(page);
len += thp_size(page);
+ processed_pages++;
}
ceph_fscache_write_to_cache(inode, offset, len, caching);
if (ceph_wbc->size_stable) {
len = min(len, ceph_wbc->i_size - offset);
- } else if (i == ceph_wbc->locked_pages) {
+ } else if (processed_pages > 0 &&
+ processed_pages == ceph_wbc->locked_pages) {
/* writepages_finish() clears writeback pages
* according to the data length, so make sure
* data length covers all locked pages */
u64 min_len = len + 1 - thp_size(page);
+ unsigned index = processed_pages - 1;
len = get_writepages_data_length(inode,
- ceph_wbc->pages[i - 1],
+ ceph_wbc->pages[index],
offset);
len = max(len, min_len);
}
@@ -1554,17 +1562,17 @@ int ceph_submit_write(struct address_space *mapping,
BUG_ON(ceph_wbc->op_idx + 1 != req->r_num_ops);
ceph_wbc->from_pool = false;
- if (i < ceph_wbc->locked_pages) {
+ if (processed_pages < ceph_wbc->locked_pages) {
BUG_ON(ceph_wbc->num_ops <= req->r_num_ops);
ceph_wbc->num_ops -= req->r_num_ops;
- ceph_wbc->locked_pages -= i;
+ ceph_wbc->locked_pages -= processed_pages;
/* allocate new pages array for next request */
ceph_wbc->data_pages = ceph_wbc->pages;
__ceph_allocate_page_array(ceph_wbc, ceph_wbc->locked_pages);
- memcpy(ceph_wbc->pages, ceph_wbc->data_pages + i,
+ memcpy(ceph_wbc->pages, ceph_wbc->data_pages + processed_pages,
ceph_wbc->locked_pages * sizeof(*ceph_wbc->pages));
- memset(ceph_wbc->data_pages + i, 0,
+ memset(ceph_wbc->data_pages + processed_pages, 0,
ceph_wbc->locked_pages * sizeof(*ceph_wbc->pages));
} else {
BUG_ON(ceph_wbc->num_ops != req->r_num_ops);
@@ -1576,7 +1584,7 @@ int ceph_submit_write(struct address_space *mapping,
ceph_osdc_start_request(&fsc->client->osdc, req);
req = NULL;
- wbc->nr_to_write -= i;
+ wbc->nr_to_write -= processed_pages;
if (ceph_wbc->pages)
goto new_request;
--
2.49.0